yangli18 / VLTVG
Improving Visual Grounding with Visual-Linguistic Verification and Iterative Reasoning, CVPR 2022
☆97 · Updated 2 years ago
Alternatives and similar repositories for VLTVG
Users interested in VLTVG are comparing it to the repositories listed below.
- SeqTR: A Simple yet Universal Network for Visual Grounding ☆141 · Updated 9 months ago
- A lightweight codebase for referring expression comprehension and segmentation ☆55 · Updated 3 years ago
- ☆187 · Updated last year
- ☆40 · Updated 3 years ago
- Official implementation of the paper "Referring Transformer: A One-step Approach to Multi-task Visual Grounding", NeurIPS 2021 ☆68 · Updated 3 years ago
- [TMM 2023] Self-paced Curriculum Adapting of CLIP for Visual Grounding ☆130 · Updated 6 months ago
- An unofficial PyTorch implementation of "TransVG: End-to-End Visual Grounding with Transformers" ☆52 · Updated 4 years ago
- [CVPR 2022] Pseudo-Q: Generating Pseudo Language Queries for Visual Grounding ☆150 · Updated last year
- ☆36 · Updated 2 years ago
- ☆86 · Updated 3 years ago
- [ICCV 2021] Official implementation of the paper "TRAR: Routing the Attention Spans in Transformers for Visual Question Answering" ☆66 · Updated 3 years ago
- ☆183 · Updated 2 years ago
- ☆21 · Updated last year
- Referring Video Object Segmentation / Multi-Object Tracking Repo ☆88 · Updated 2 years ago
- [CVPR 2022] Language-Bridged Spatial-Temporal Interaction for Referring Video Object Segmentation ☆23 · Updated 2 years ago
- A new framework for open-vocabulary object detection, based on maskrcnn-benchmark ☆240 · Updated 2 years ago
- Code for Referring Image Segmentation via Cross-Modal Progressive Comprehension, CVPR 2020 ☆63 · Updated 4 years ago
- Official repository for "Vita-CLIP: Video and text adaptive CLIP via Multimodal Prompting" [CVPR 2023] ☆121 · Updated 2 years ago
- Improving One-stage Visual Grounding by Recursive Sub-query Construction, ECCV 2020 ☆86 · Updated 3 years ago
- [NeurIPS 2022] Embracing Consistency: A One-Stage Approach for Spatio-Temporal Video Grounding ☆52 · Updated last year
- [CVPR 2023] HOICLIP: Efficient Knowledge Transfer for HOI Detection with Vision-Language Models ☆68 · Updated last year
- [CVPR 2021] Look Before You Leap: Learning Landmark Features for One-stage Visual Grounding ☆48 · Updated 3 years ago
- ☆211 · Updated 2 years ago
- [CVPR 2023] Code for "Position-guided Text Prompt for Vision-Language Pre-training" ☆152 · Updated 2 years ago
- ☆94 · Updated last year
- [AAAI 2022] Negative Sample Matters: A Renaissance of Metric Learning for Temporal Grounding ☆90 · Updated 2 years ago
- Official code for Fine-Grained Visual Prompting, NeurIPS 2023 ☆54 · Updated last year
- What is Where by Looking: Weakly-Supervised Open-World Phrase-Grounding without Text Inputs ☆25 · Updated 2 years ago
- [TPAMI 2024] Official PyTorch code for the paper "Context Disentangling and Prototype Inheriting for Robust Visual Grounding"… ☆18 · Updated 3 months ago
- ☆23 · Updated 3 years ago