cdancette / detect-shortcuts
Repo for ICCV 2021 paper: Beyond Question-Based Biases: Assessing Multimodal Shortcut Learning in Visual Question Answering
☆28 · Jul 1, 2024 · Updated last year
Alternatives and similar repositories for detect-shortcuts
Users interested in detect-shortcuts are comparing it to the libraries listed below
- A collection of papers about VQA-CP datasets and their results ☆41 · Mar 18, 2022 · Updated 3 years ago
- BottomUpTopDown VQA model with question-type debiasing ☆22 · Oct 6, 2019 · Updated 6 years ago
- ☆13 · Feb 14, 2022 · Updated 3 years ago
- NeurIPS 2019 paper: RUBi: Reducing Unimodal Biases for Visual Question Answering ☆65 · Mar 29, 2021 · Updated 4 years ago
- Counterfactual Samples Synthesizing for Robust VQA ☆79 · Nov 24, 2022 · Updated 3 years ago
- Code for Greedy Gradient Ensemble for Visual Question Answering (ICCV 2021, Oral) ☆27 · Mar 28, 2022 · Updated 3 years ago
- Demonstrates failures of bias mitigation methods under varying types/levels of biases (WACV 2021) ☆26 · Mar 31, 2024 · Updated last year
- GQA-OOD is a new dataset and benchmark for the evaluation of VQA models in OOD (out-of-distribution) settings. ☆32 · Mar 1, 2021 · Updated 4 years ago
- Shows visual grounding methods can be right for the wrong reasons! (ACL 2020) ☆23 · Jun 26, 2020 · Updated 5 years ago
- Code for our EMNLP 2022 paper: "Language Prior Is Not the Only Shortcut: A Benchmark for Shortcut Learning in VQA" ☆40 · Nov 1, 2022 · Updated 3 years ago
- [NeurIPS 2021] Introspective Distillation for Robust Question Answering ☆13 · Dec 7, 2021 · Updated 4 years ago
- Code for the NeurIPS 2019 paper "Self-Critical Reasoning for Robust Visual Question Answering" ☆41 · Sep 9, 2019 · Updated 6 years ago
- GAN(TK)²: GAN Neural Tangent Kernel ToolKit ☆13 · Jul 12, 2022 · Updated 3 years ago
- PyTorch implementation of "Debiased Visual Question Answering from Feature and Sample Perspectives" (NeurIPS 2021) ☆27 · Oct 13, 2022 · Updated 3 years ago
- The Stream-51 dataset for streaming classification and novelty detection from videos. ☆15 · Feb 22, 2022 · Updated 3 years ago
- [CVPR 2021] Counterfactual VQA: A Cause-Effect Look at Language Bias ☆130 · Dec 15, 2021 · Updated 4 years ago
- ☆15 · Nov 6, 2022 · Updated 3 years ago
- Code for our IJCAI 2020 paper: Overcoming Language Priors with Self-supervised Learning for Visual Question Answering ☆52 · Aug 21, 2020 · Updated 5 years ago
- Methods of training NLP models to ignore biased strategies ☆55 · May 22, 2023 · Updated 2 years ago
- High-level framework for starting deep learning projects (lightweight, flexible, easy to extend) ☆196 · Jun 21, 2022 · Updated 3 years ago
- Implementation of the EMNLP 2020 paper "Learning to Contrast the Counterfactual Samples for Robust Visual Question Answering" ☆15 · Sep 9, 2021 · Updated 4 years ago
- Code release for Hu et al., Language-Conditioned Graph Networks for Relational Reasoning (ICCV 2019) ☆92 · Aug 9, 2019 · Updated 6 years ago
- PyTorch implementation of the Recurrent Aggregation of Multimodal Embeddings Network (RAMEN) from our CVPR 2019 paper. ☆17 · Apr 5, 2020 · Updated 5 years ago
- Human-like Controllable Image Captioning with Verb-specific Semantic Roles ☆36 · Mar 11, 2022 · Updated 3 years ago
- ☆12 · Jun 17, 2020 · Updated 5 years ago
- VQA driven by bottom-up and top-down attention and knowledge ☆14 · Nov 21, 2018 · Updated 7 years ago
- MUREL (CVPR 2019), a multimodal relational reasoning module for VQA ☆195 · Feb 9, 2020 · Updated 6 years ago
- A self-evident application of the VQA task is to design systems that aid blind people with sight-reliant queries. The VizWiz VQA dataset … ☆15 · Dec 12, 2023 · Updated 2 years ago
- A collection of multimodal datasets and visual features for VQA and captioning in PyTorch. Just run "pip install multimodal" ☆84 · Feb 25, 2022 · Updated 3 years ago
- Code release for Park et al., Multimodal Explanations: Justifying Decisions and Pointing to the Evidence (CVPR 2018) ☆48 · Jul 27, 2018 · Updated 7 years ago
- [ACL 2025 Findings] "Worse than Random? An Embarrassingly Simple Probing Evaluation of Large Multimodal Models in Medical VQA" ☆25 · Feb 21, 2025 · Updated 11 months ago
- [ICLR 2024 Poster] SCHEMA: State CHangEs MAtter for Procedure Planning in Instructional Videos ☆20 · Aug 21, 2025 · Updated 5 months ago
- ☆18 · Apr 10, 2023 · Updated 2 years ago
- ☆20 · Oct 21, 2022 · Updated 3 years ago
- CVPR 2022 (Oral) PyTorch code for Unsupervised Vision-and-Language Pre-training via Retrieval-based Multi-Granular Alignment ☆22 · Apr 15, 2022 · Updated 3 years ago
- Research code for the NeurIPS 2020 Spotlight paper "Large-Scale Adversarial Training for Vision-and-Language Representation Learning": LXMERT… ☆21 · Oct 20, 2020 · Updated 5 years ago
- Visual question answering prompting recipes for large vision-language models ☆28 · Sep 14, 2024 · Updated last year
- ☆26 · Apr 15, 2021 · Updated 4 years ago
- Official codebase for "Ref-NMS: Breaking Proposal Bottlenecks in Two-Stage Referring Expression Grounding" ☆22 · Dec 20, 2020 · Updated 5 years ago