BottomUpTopDown VQA model with question-type debiasing
☆22 · Updated Oct 6, 2019
Alternatives and similar repositories for bottom-up-attention-vqa
Users interested in bottom-up-attention-vqa are also comparing it to the repositories listed below.
- Counterfactual Samples Synthesizing for Robust VQA ☆79 · Updated Nov 24, 2022
- Code for NeurIPS 2019 paper "Self-Critical Reasoning for Robust Visual Question Answering" ☆41 · Updated Sep 9, 2019
- Repo for ICCV 2021 paper: Beyond Question-Based Biases: Assessing Multimodal Shortcut Learning in Visual Question Answering ☆29 · Updated Jul 1, 2024
- A collection of papers about VQA-CP datasets and their results ☆41 · Updated Mar 18, 2022
- Code for our IJCAI 2020 paper: Overcoming Language Priors with Self-supervised Learning for Visual Question Answering ☆52 · Updated Aug 21, 2020
- Methods for training NLP models to ignore biased strategies ☆55 · Updated May 22, 2023
- ☆13 · Updated Feb 14, 2022
- NeurIPS 2019 paper: RUBi: Reducing Unimodal Biases for Visual Question Answering ☆65 · Updated Mar 29, 2021
- ☆12 · Updated Mar 8, 2021
- Code for Greedy Gradient Ensemble for Visual Question Answering (ICCV 2021, Oral) ☆27 · Updated Mar 28, 2022
- ☆34 · Updated Jan 5, 2021
- Shows visual grounding methods can be right for the wrong reasons! (ACL 2020) ☆23 · Updated Jun 26, 2020
- MUREL (CVPR 2019), a multimodal relational reasoning module for VQA ☆195 · Updated Feb 9, 2020
- [CVPR 2021] Counterfactual VQA: A Cause-Effect Look at Language Bias ☆130 · Updated Dec 15, 2021
- Grid features pre-training code for visual question answering ☆269 · Updated Sep 17, 2021
- ☆27 · Updated Oct 7, 2021
- A lightweight, scalable, and general framework for visual question answering research ☆330 · Updated Sep 3, 2021
- Demonstrates failures of bias mitigation methods under varying types/levels of biases (WACV 2021) ☆26 · Updated Mar 31, 2024
- GQA-OOD is a new dataset and benchmark for the evaluation of VQA models in OOD (out-of-distribution) settings ☆32 · Updated Mar 1, 2021
- ☆17 · Updated Sep 2, 2023
- ☆79 · Updated Oct 8, 2022
- Deep Modular Co-Attention Networks for Visual Question Answering ☆457 · Updated Dec 16, 2020
- Implementation for the paper "Hierarchical Conditional Relation Networks for Video Question Answering" (Le et al., CVPR 2020, Oral) ☆134 · Updated Jul 25, 2024
- This repository provides the dataset introduced by our WSSTG paper ☆13 · Updated Jul 21, 2019
- Heterogeneous Memory Enhanced Multimodal Attention Model for VideoQA ☆54 · Updated Sep 13, 2021
- R-VQA: Visual Question Answering with Relation Facts ☆19 · Updated May 11, 2021
- Code release for Hu et al., Language-Conditioned Graph Networks for Relational Reasoning, ICCV 2019 ☆92 · Updated Aug 9, 2019
- An efficient PyTorch implementation of the winning entry of the 2017 VQA Challenge ☆765 · Updated Mar 10, 2024
- ☆16 · Updated Dec 28, 2020
- Human-like Controllable Image Captioning with Verb-specific Semantic Roles ☆36 · Updated Mar 11, 2022
- ☆12 · Updated Jun 17, 2020
- VQA driven by bottom-up and top-down attention and knowledge ☆14 · Updated Nov 21, 2018
- Official implementation for the MM'22 paper ☆14 · Updated Jun 30, 2022
- Neural State Machine implemented in PyTorch ☆71 · Updated Oct 10, 2019
- ☆77 · Updated Nov 22, 2022
- CVPR 2022 (Oral) PyTorch code for Unsupervised Vision-and-Language Pre-training via Retrieval-based Multi-Granular Alignment ☆22 · Updated Apr 15, 2022
- A curated list of research papers in Referring Expression Comprehension (REC) ☆46 · Updated May 13, 2021
- Research code for ICCV 2019 paper "Relation-aware Graph Attention Network for Visual Question Answering" ☆187 · Updated Apr 15, 2021
- Official code for the paper "Contrast and Classify: Training Robust VQA Models" published at ICCV 2021 ☆19 · Updated Jul 27, 2021