yashkant / concat-vqa
Official code for the paper "Contrast and Classify: Training Robust VQA Models", published at ICCV 2021
☆19 · Updated 3 years ago
Alternatives and similar repositories for concat-vqa:
Users interested in concat-vqa are comparing it to the repositories listed below
- Data Release for VALUE Benchmark ☆31 · Updated 3 years ago
- ☆26 · Updated 3 years ago
- ☆13 · Updated 3 years ago
- CVPR 2022 (Oral) Pytorch Code for Unsupervised Vision-and-Language Pre-training via Retrieval-based Multi-Granular Alignment ☆22 · Updated 3 years ago
- A PyTorch implementation of a data augmentation method for visual question answering ☆21 · Updated last year
- [EMNLP 2020] What is More Likely to Happen Next? Video-and-Language Future Event Prediction ☆48 · Updated 2 years ago
- Code for WACV 2021 Paper "Meta Module Network for Compositional Visual Reasoning" ☆43 · Updated 3 years ago
- Code for Greedy Gradient Ensemble for Visual Question Answering (ICCV 2021, Oral) ☆26 · Updated 3 years ago
- [NeurIPS 2021] Introspective Distillation for Robust Question Answering ☆13 · Updated 3 years ago
- Code for the CVPR 2020 oral paper: Weakly Supervised Visual Semantic Parsing ☆35 · Updated 2 years ago
- ☆28 · Updated 2 years ago
- MLPs for Vision and Language Modeling (Coming Soon) ☆27 · Updated 3 years ago
- A collection of papers about VQA-CP datasets and their results ☆38 · Updated 3 years ago
- Pytorch version of DeCEMBERT: Learning from Noisy Instructional Videos via Dense Captions and Entropy Minimization (NAACL 2021) ☆17 · Updated 2 years ago
- Unpaired Image Captioning ☆35 · Updated 4 years ago
- Starter Code for VALUE benchmark ☆80 · Updated 2 years ago
- Repo for ICCV 2021 paper: Beyond Question-Based Biases: Assessing Multimodal Shortcut Learning in Visual Question Answering ☆26 · Updated 9 months ago
- Counterfactual Samples Synthesizing for Robust VQA ☆78 · Updated 2 years ago
- A video retrieval dataset How2R and a video QA dataset How2QA ☆24 · Updated 4 years ago
- Shows visual grounding methods can be right for the wrong reasons! (ACL 2020) ☆23 · Updated 4 years ago
- ☆63 · Updated 3 years ago
- Human-like Controllable Image Captioning with Verb-specific Semantic Roles ☆36 · Updated 3 years ago
- Dataset and starting code for the visual entailment dataset ☆109 · Updated 3 years ago
- The SVO-Probes Dataset for Verb Understanding ☆31 · Updated 3 years ago
- ROSITA: Enhancing Vision-and-Language Semantic Alignments via Cross- and Intra-modal Knowledge Integration ☆56 · Updated last year
- Learning phrase grounding from captioned images through InfoNCE bound on mutual information ☆73 · Updated 4 years ago
- Code release for Learning to Assemble Neural Module Tree Networks for Visual Grounding (ICCV 2019) ☆39 · Updated 5 years ago
- CVPR 2021 Official Pytorch Code for UC2: Universal Cross-lingual Cross-modal Vision-and-Language Pre-training ☆34 · Updated 3 years ago
- BottomUpTopDown VQA model with question-type debiasing ☆22 · Updated 5 years ago
- Pytorch implementation for our NeurIPS 2019 paper "TAB-VCR: Tags and Attributes based VCR Baselines" https://arxiv.org/abs/1910.14671 ☆18 · Updated 3 years ago