[CVPR 2021] Look Before You Leap: Learning Landmark Features for One-Stage Visual Grounding
☆50 · Updated Aug 31, 2021
Alternatives and similar repositories for LBYLNet
Users interested in LBYLNet are comparing it to the repositories listed below.
- Improving One-stage Visual Grounding by Recursive Sub-query Construction, ECCV 2020 ☆90 · Updated Sep 30, 2021
- Official codebase for "Ref-NMS: Breaking Proposal Bottlenecks in Two-Stage Referring Expression Grounding" ☆22 · Updated Dec 20, 2020
- Adaptive Reconstruction Network for Weakly Supervised Referring Expression Grounding ☆33 · Updated Aug 29, 2019
- The official PyTorch code for "Relation-aware Instance Refinement for Weakly Supervised Visual Grounding", accepted by CVPR 2021 ☆27 · Updated Oct 9, 2021
- Improving Visual Grounding with Visual-Linguistic Verification and Iterative Reasoning, CVPR 2022 ☆97 · Updated Dec 2, 2022
- ☆41 · Updated Jun 3, 2022
- Iterative shrinking for referring expression grounding using deep reinforcement learning ☆14 · Updated Nov 27, 2021
- Preliminary code for reviewers ☆13 · Updated Mar 30, 2021
- Implementation for MAF: Multimodal Alignment Framework ☆46 · Updated Nov 25, 2020
- ☆11 · Updated Apr 25, 2024
- An unofficial PyTorch implementation of "TransVG: End-to-End Visual Grounding with Transformers" ☆52 · Updated Jun 7, 2021
- Graph-Structured Referring Expressions Reasoning in the Wild, CVPR 2020 (Oral) ☆116 · Updated Aug 10, 2020
- The official implementation of the CVPR 2021 paper "Improving Weakly Supervised Visual Grounding by Contrastive Knowledge Distillation" ☆12 · Updated Oct 15, 2021
- ☆196 · Updated Feb 27, 2024
- ☆14 · Updated Jul 13, 2021
- A curated list of research papers in Referring Expression Comprehension (REC) ☆46 · Updated May 13, 2021
- The official implementation of "Learning Cross-modal Context Graph for Visual Grounding", AAAI 2020 ☆58 · Updated Oct 25, 2021
- Third place in the 2021 IEEE GRSS Data Fusion Contest, Track MSD ☆10 · Updated Mar 31, 2021
- Official implementation of "Referring Transformer: A One-step Approach to Multi-task Visual Grounding", NeurIPS 2021 ☆67 · Updated May 26, 2022
- ☆20 · Updated Oct 21, 2022
- [CVPR 2022] SVIP: Sequence VerIfication for Procedures in Videos ☆24 · Updated Feb 24, 2023
- ☆12 · Updated Nov 17, 2019
- MLLM-Tool: A Multimodal Large Language Model for Tool Agent Learning ☆141 · Updated Oct 10, 2025
- The implementation of "Learning to Contrast the Counterfactual Samples for Robust Visual Question Answering", EMNLP 2020 ☆15 · Updated Sep 9, 2021
- ☆12 · Updated Mar 8, 2021
- ☆10 · Updated Jan 9, 2025
- ☆14 · Updated Dec 9, 2023
- Flickr30K Entities Dataset ☆183 · Updated Dec 23, 2018
- Repo for the ICCV 2021 paper "Beyond Question-Based Biases: Assessing Multimodal Shortcut Learning in Visual Question Answering" ☆29 · Updated Jul 1, 2024
- [CVPR 2023] Code for "Improving Visual Grounding by Encouraging Consistent Gradient-based Explanations" ☆19 · Updated Oct 10, 2023
- Code for the IJCAI 2020 paper "Overcoming Language Priors with Self-supervised Learning for Visual Question Answering" ☆52 · Updated Aug 21, 2020
- MAttNet: Modular Attention Network for Referring Expression Comprehension ☆298 · Updated Nov 29, 2022
- Free-form Description-guided 3D Visual Graph Networks for Object Grounding in Point Cloud ☆17 · Updated Jun 23, 2022
- ☆14 · Updated Nov 28, 2021
- ☆79 · Updated Oct 8, 2022
- Code release for Hu et al., "Language-Conditioned Graph Networks for Relational Reasoning", ICCV 2019 ☆92 · Updated Aug 9, 2019
- ☆40 · Updated Nov 29, 2022
- Cross-media Structured Common Space for Multimedia Event Extraction (ACL 2020) ☆78 · Updated Oct 3, 2023
- Temporal Moment (Action) Localization via Language / Temporal Language Grounding / Video Moment Retrieval ☆100 · Updated Jan 23, 2022