yanxinzju / CSS-VQA
Counterfactual Samples Synthesizing for Robust VQA
☆79 · Updated Nov 24, 2022
Alternatives and similar repositories for CSS-VQA
Users interested in CSS-VQA are comparing it to the repositories listed below.
- BottomUpTopDown VQA model with question-type debiasing ☆22 · Updated Oct 6, 2019
- Code for the NeurIPS 2019 paper "Self-Critical Reasoning for Robust Visual Question Answering" ☆41 · Updated Sep 9, 2019
- A collection of papers about the VQA-CP datasets and their results ☆41 · Updated Mar 18, 2022
- [CVPR 2021] Counterfactual VQA: A Cause-Effect Look at Language Bias ☆130 · Updated Dec 15, 2021
- NeurIPS 2019 paper: RUBi: Reducing Unimodal Biases for Visual Question Answering ☆65 · Updated Mar 29, 2021
- Code for our IJCAI 2020 paper: Overcoming Language Priors with Self-supervised Learning for Visual Question Answering ☆52 · Updated Aug 21, 2020
- [NeurIPS 2021] Introspective Distillation for Robust Question Answering ☆13 · Updated Dec 7, 2021
- Implementation of the EMNLP 2020 paper "Learning to Contrast the Counterfactual Samples for Robust Visual Question Answering" ☆15 · Updated Sep 9, 2021
- Repo for the ICCV 2021 paper: Beyond Question-Based Biases: Assessing Multimodal Shortcut Learning in Visual Question Answering ☆28 · Updated Jul 1, 2024
- GQA-OOD is a new dataset and benchmark for evaluating VQA models in out-of-distribution (OOD) settings ☆32 · Updated Mar 1, 2021
- Code for Greedy Gradient Ensemble for Visual Question Answering (ICCV 2021, Oral) ☆27 · Updated Mar 28, 2022
- A PyTorch implementation of a data augmentation method for visual question answering ☆21 · Updated May 25, 2023
- Shows visual grounding methods can be right for the wrong reasons! (ACL 2020) ☆23 · Updated Jun 26, 2020
- ☆13 · Updated Feb 14, 2022
- [ECCV 2022] Rethinking Data Augmentation for Robust Visual Question Answering ☆13 · Updated Nov 23, 2022
- Deep Modular Co-Attention Networks for Visual Question Answering ☆458 · Updated Dec 16, 2020
- ☆12 · Updated Mar 8, 2021
- Re-implementation of "R-VQA: Learning Visual Relation Facts with Semantic Attention for Visual Question Answering" ☆12 · Updated Apr 11, 2019
- MUREL (CVPR 2019), a multimodal relational reasoning module for VQA ☆195 · Updated Feb 9, 2020
- A lightweight, scalable, and general framework for visual question answering research ☆330 · Updated Sep 3, 2021
- Official code for the paper "Contrast and Classify: Training Robust VQA Models", published at ICCV 2021 ☆19 · Updated Jul 27, 2021
- Grid features pre-training code for visual question answering ☆269 · Updated Sep 17, 2021
- Implementation of Mutan+ArticleNet on OKVQA ☆10 · Updated Jan 11, 2021
- ☆79 · Updated Oct 8, 2022
- ☆18 · Updated May 31, 2023
- ☆12 · Updated Jun 17, 2020
- [CVPR 2020] The official PyTorch implementation of "Visual Commonsense R-CNN" ☆359 · Updated May 2, 2021
- Code release for Hu et al., "Language-Conditioned Graph Networks for Relational Reasoning", ICCV 2019 ☆92 · Updated Aug 9, 2019
- Implementation for the CVPR 2020 paper "Two Causal Principles for Improving Visual Dialog" ☆31 · Updated Feb 19, 2023
- ☆38 · Updated Jan 20, 2023
- Research code for the ICCV 2019 paper "Relation-aware Graph Attention Network for Visual Question Answering" ☆187 · Updated Apr 15, 2021
- Implementation for the paper "Hierarchical Conditional Relation Networks for Video Question Answering" (Le et al., CVPR 2020, Oral) ☆134 · Updated Jul 25, 2024
- Code for "Counterfactual Variable Control for Robust and Interpretable Question Answering"☆14Oct 13, 2020Updated 5 years ago
- MLPs for Vision and Langauge Modeling (Coming Soon)☆27Dec 9, 2021Updated 4 years ago
- CVPR 2022 (Oral) Pytorch Code for Unsupervised Vision-and-Language Pre-training via Retrieval-based Multi-Granular Alignment☆22Apr 15, 2022Updated 3 years ago
- ☆14May 10, 2021Updated 4 years ago
- Code release for Learning to Assemble Neural Module Tree Networks for Visual Grounding (ICCV 2019)☆39Nov 23, 2019Updated 6 years ago
- Code for our ACL2021 paper: "Check It Again: Progressive Visual Question Answering via Visual Entailment"☆31Nov 24, 2021Updated 4 years ago
- A pytroch reimplementation of "Bilinear Attention Network", "Intra- and Inter-modality Attention", "Learning Conditioned Graph Structures…☆297Jan 6, 2026Updated last month