LeapLabTHU / Pseudo-Q
[CVPR 2022] Pseudo-Q: Generating Pseudo Language Queries for Visual Grounding
☆152 · Updated last year
Alternatives and similar repositories for Pseudo-Q
Users interested in Pseudo-Q are comparing it to the repositories listed below.
- A lightweight codebase for referring expression comprehension and segmentation ☆56 · Updated 3 years ago
- ☆87 · Updated 3 years ago
- SeqTR: A Simple yet Universal Network for Visual Grounding ☆144 · Updated last year
- Official implementation of the paper "Referring Transformer: A One-step Approach to Multi-task Visual Grounding", NeurIPS 2021 ☆69 · Updated 3 years ago
- Improving Visual Grounding with Visual-Linguistic Verification and Iterative Reasoning, CVPR 2022 ☆96 · Updated 3 years ago
- ☆41 · Updated 3 years ago
- [CVPR 2023] The code for "Position-guided Text Prompt for Vision-Language Pre-training" ☆152 · Updated 2 years ago
- ☆38 · Updated 2 years ago
- [ICCV 2021] Official implementation of the paper "TRAR: Routing the Attention Spans in Transformers for Visual Question Answering" ☆68 · Updated 4 years ago
- [NeurIPS 2022] Embracing Consistency: A One-Stage Approach for Spatio-Temporal Video Grounding ☆53 · Updated last year
- Source code for EMNLP 2022 paper "PEVL: Position-enhanced Pre-training and Prompt Tuning for Vision-language Models" ☆48 · Updated 3 years ago
- Code for the paper: "SuS-X: Training-Free Name-Only Transfer of Vision-Language Models" [ICCV'23] ☆105 · Updated 2 years ago
- ☆30 · Updated last year
- ☆194 · Updated last year
- ☆60 · Updated 7 months ago
- ☆20 · Updated last year
- UniTAB: Unifying Text and Box Outputs for Grounded VL Modeling, ECCV 2022 (Oral Presentation) ☆89 · Updated 2 years ago
- [NeurIPS 2022 Spotlight] RLIP: Relational Language-Image Pre-training and a series of other methods to solve HOI detection and Scene Grap… ☆78 · Updated last year
- An unofficial PyTorch implementation of "TransVG: End-to-End Visual Grounding with Transformers". ☆52 · Updated 4 years ago
- ☆94 · Updated 2 years ago
- [CVPR 2022] Visual Abductive Reasoning ☆123 · Updated last year
- This repo is the official implementation of UPL (Unsupervised Prompt Learning for Vision-Language Models). ☆117 · Updated 3 years ago
- Official repository for "Vita-CLIP: Video and text adaptive CLIP via Multimodal Prompting" [CVPR 2023] ☆127 · Updated 2 years ago
- Winning solution for the Generic Event Boundary Captioning task in the LOVEU Challenge (CVPR 2023 workshop) ☆30 · Updated last year
- [ICCV 2023] Prompt-aligned Gradient for Prompt Tuning ☆167 · Updated 2 years ago
- Colorful Prompt Tuning for Pre-trained Vision-Language Models ☆49 · Updated 3 years ago
- Awesome Vision-Language Pretraining Papers ☆37 · Updated 10 months ago
- [ACM MM 22] Correspondence Matters for Video Referring Expression Comprehension ☆15 · Updated 3 years ago
- [ICLR 2023] DeCap: Decoding CLIP Latents for Zero-shot Captioning ☆137 · Updated 2 years ago
- ☆97 · Updated 3 years ago