jialinwu17 / self_critical_vqa
Code for the NeurIPS 2019 paper "Self-Critical Reasoning for Robust Visual Question Answering"
☆41 · Sep 9, 2019 · Updated 6 years ago
Alternatives and similar repositories for self_critical_vqa
Users interested in self_critical_vqa are comparing it to the repositories listed below.
- BottomUpTopDown VQA model with question-type debiasing · ☆22 · Oct 6, 2019 · Updated 6 years ago
- Counterfactual Samples Synthesizing for Robust VQA · ☆79 · Nov 24, 2022 · Updated 3 years ago
- Shows visual grounding methods can be right for the wrong reasons! (ACL 2020) · ☆23 · Jun 26, 2020 · Updated 5 years ago
- NeurIPS 2019 paper: RUBi: Reducing Unimodal Biases for Visual Question Answering · ☆65 · Mar 29, 2021 · Updated 4 years ago
- [NeurIPS 2021] Introspective Distillation for Robust Question Answering · ☆13 · Dec 7, 2021 · Updated 4 years ago
- A collection of papers about VQA-CP datasets and their results · ☆41 · Mar 18, 2022 · Updated 3 years ago
- Methods of training NLP models to ignore biased strategies · ☆55 · May 22, 2023 · Updated 2 years ago
- ☆13 · Feb 14, 2022 · Updated 4 years ago
- GQA-OOD is a new dataset and benchmark for evaluating VQA models in out-of-distribution (OOD) settings · ☆32 · Mar 1, 2021 · Updated 4 years ago
- Code release for Hu et al., "Language-Conditioned Graph Networks for Relational Reasoning", ICCV 2019 · ☆92 · Aug 9, 2019 · Updated 6 years ago
- Code for the ACL 2021 paper "Check It Again: Progressive Visual Question Answering via Visual Entailment" · ☆31 · Nov 24, 2021 · Updated 4 years ago
- Repo for the ICCV 2021 paper "Beyond Question-Based Biases: Assessing Multimodal Shortcut Learning in Visual Question Answering" · ☆28 · Jul 1, 2024 · Updated last year
- Deep Modular Co-Attention Networks for Visual Question Answering · ☆458 · Dec 16, 2020 · Updated 5 years ago
- Re-implementation of "R-VQA: Learning Visual Relation Facts with Semantic Attention for Visual Question Answering" · ☆12 · Apr 11, 2019 · Updated 6 years ago
- An efficient PyTorch implementation of the winning entry of the 2017 VQA Challenge · ☆765 · Mar 10, 2024 · Updated last year
- MUREL (CVPR 2019), a multimodal relational reasoning module for VQA · ☆195 · Feb 9, 2020 · Updated 6 years ago
- VQA driven by bottom-up and top-down attention and knowledge · ☆14 · Nov 21, 2018 · Updated 7 years ago
- VQA baseline with Conditional Batch Normalization · ☆15 · Apr 9, 2018 · Updated 7 years ago
- A PyTorch implementation of the Recurrent Aggregation of Multimodal Embeddings Network (RAMEN) from the CVPR 2019 paper · ☆17 · Apr 5, 2020 · Updated 5 years ago
- Official implementation for the MM '22 paper · ☆14 · Jun 30, 2022 · Updated 3 years ago
- ☆30 · Dec 16, 2022 · Updated 3 years ago
- Code for the paper "Learning Conditioned Graph Structures for Interpretable Visual Question Answering" · ☆150 · Mar 11, 2019 · Updated 6 years ago
- Code for the model "Heterogeneous Graph Learning for Visual Commonsense Reasoning" (NeurIPS 2019) · ☆47 · Jul 27, 2020 · Updated 5 years ago
- Unofficial reimplementation of "Dynamic Fusion with Intra- and Inter-modality Attention Flow for Visual Question Answering" · ☆18 · Oct 30, 2019 · Updated 6 years ago
- Bilinear attention networks for visual question answering · ☆548 · Oct 30, 2023 · Updated 2 years ago
- [ICLR 2024 Poster] SCHEMA: State CHangEs MAtter for Procedure Planning in Instructional Videos · ☆20 · Aug 21, 2025 · Updated 5 months ago
- Grid features pre-training code for visual question answering · ☆269 · Sep 17, 2021 · Updated 4 years ago
- CVPR 2022 (Oral) PyTorch code for "Unsupervised Vision-and-Language Pre-training via Retrieval-based Multi-Granular Alignment" · ☆22 · Apr 15, 2022 · Updated 3 years ago
- A lightweight, scalable, and general framework for visual question answering research · ☆330 · Sep 3, 2021 · Updated 4 years ago
- Adaptive Reconstruction Network for Weakly Supervised Referring Expression Grounding · ☆33 · Aug 29, 2019 · Updated 6 years ago
- Research code for the ICCV 2019 paper "Relation-aware Graph Attention Network for Visual Question Answering" · ☆187 · Apr 15, 2021 · Updated 4 years ago
- Heterogeneous Memory Enhanced Multimodal Attention Model for VideoQA · ☆54 · Sep 13, 2021 · Updated 4 years ago
- Official code for the paper "Contrast and Classify: Training Robust VQA Models", published at ICCV 2021 · ☆19 · Jul 27, 2021 · Updated 4 years ago
- Good practices in VQA systems, such as POS-tag attention, structured triplet learning, and triplet attention, are very general and can be… · ☆19 · Jan 23, 2018 · Updated 8 years ago
- Visual Question Answering project with state-of-the-art single-model performance · ☆131 · Jun 18, 2018 · Updated 7 years ago
- Code for "Greedy Gradient Ensemble for Visual Question Answering" (ICCV 2021, Oral) · ☆27 · Mar 28, 2022 · Updated 3 years ago
- ☆27 · Oct 7, 2021 · Updated 4 years ago
- A PyTorch implementation of a data augmentation method for visual question answering · ☆21 · May 25, 2023 · Updated 2 years ago
- [ECCV 2022] Rethinking Data Augmentation for Robust Visual Question Answering · ☆13 · Nov 23, 2022 · Updated 3 years ago