jiasenlu / bottom-up-attention
Bottom-up attention model for image captioning and VQA, based on Faster R-CNN and Visual Genome
☆24 · Updated 5 years ago
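For context, a minimal PyTorch sketch of how the region features this detector produces are typically consumed downstream: a top-down attention layer scores the K bottom-up features against a query vector (e.g. an encoded question) and pools them. This is an illustration only, not the repository's Caffe-based code; the class name, dimensions, and the additive-attention form are assumptions (the paper itself uses a gated nonlinearity), and 36 regions per image is the common fixed setting.

```python
import torch
import torch.nn as nn

class TopDownAttention(nn.Module):
    """Illustrative top-down attention over K pre-extracted bottom-up
    region features, driven by a query (e.g. question/LSTM state)."""

    def __init__(self, region_dim=2048, query_dim=512, hidden_dim=512):
        super().__init__()
        self.proj_v = nn.Linear(region_dim, hidden_dim)  # project regions
        self.proj_q = nn.Linear(query_dim, hidden_dim)   # project query
        self.score = nn.Linear(hidden_dim, 1)            # per-region score

    def forward(self, regions, query):
        # regions: (batch, K, region_dim) features from the detector
        # query:   (batch, query_dim) top-down signal
        joint = torch.tanh(self.proj_v(regions) + self.proj_q(query).unsqueeze(1))
        alpha = torch.softmax(self.score(joint), dim=1)  # (batch, K, 1) weights
        return (alpha * regions).sum(dim=1)              # (batch, region_dim)

# Illustrative usage: batch of 8 images, 36 regions each, 2048-d features.
att = TopDownAttention()
v = torch.randn(8, 36, 2048)  # pre-extracted bottom-up features
q = torch.randn(8, 512)       # e.g. GRU-encoded question
fused = att(v, q)             # (8, 2048) attended image representation
```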
Related projects
Alternatives and complementary repositories for bottom-up-attention
- [CVPR 2021] Counterfactual VQA: A Cause-Effect Look at Language Bias ☆116 · Updated 2 years ago
- Counterfactual Samples Synthesizing for Robust VQA ☆76 · Updated last year
- PyTorch implementation of "Debiased Visual Question Answering from Feature and Sample Perspectives" (NeurIPS 2021) ☆22 · Updated 2 years ago
- The source code of the ACL 2020 paper "Cross-Modality Relevance for Reasoning on Language and Vision" ☆26 · Updated 3 years ago
- Multitask Multilingual Multimodal Pre-training ☆70 · Updated last year
- Code for the IJCAI 2022 paper "Declaration-based Prompt Tuning for Visual Question Answering" ☆19 · Updated 2 years ago
- Code used in the ACL 2020 paper "History for Visual Dialog: Do we really need it?" ☆34 · Updated last year
- Neural Machine Translation with Universal Visual Representation (ICLR 2020) ☆87 · Updated 4 years ago
- MuKEA: Multimodal Knowledge Extraction and Accumulation for Knowledge-based Visual Question Answering ☆88 · Updated last year
- Cross-View Language Modeling: Towards Unified Cross-Lingual Cross-Modal Pre-training (ACL 2023) ☆88 · Updated last year
- Bottom-Up Top-Down (BUTD) VQA model with question-type debiasing ☆23 · Updated 5 years ago
- [EMNLP 2020] What is More Likely to Happen Next? Video-and-Language Future Event Prediction ☆48 · Updated 2 years ago
- Code for the ACL 2021 paper "Check It Again: Progressive Visual Question Answering via Visual Entailment" ☆31 · Updated 2 years ago
- Research code for the NeurIPS 2020 spotlight paper "Large-Scale Adversarial Training for Vision-and-Language Representation Learning": UNITER… ☆119 · Updated 3 years ago
- Support for extracting BUTD features for NLVR2 images ☆18 · Updated 4 years ago
- A Fast and Accurate One-Stage Approach to Visual Grounding, ICCV 2019 (Oral) ☆144 · Updated 4 years ago
- Dataset and source code for the EMNLP 2019 paper "What You See is What You Get: Visual Pronoun Coreference Resolution in Dialogues" ☆25 · Updated 3 years ago
- TCIC: Theme Concepts Learning Cross Language and Vision for Image Captioning (IJCAI 2021) ☆9 · Updated 3 years ago
- Code, models, and datasets for the OpenViDial dataset ☆131 · Updated 2 years ago
- Implementation of "Large-scale Pretraining for Visual Dialog" (https://arxiv.org/abs/1912.02379) ☆95 · Updated 4 years ago
- [TACL 2021] Code and data for the framework in "Multimodal Pretraining Unmasked: A Meta-Analysis and a Unified Framework of Vision-and-La…" ☆114 · Updated 2 years ago
- PyTorch implementation of MUCKO (IJCAI 2020) ☆19 · Updated 4 years ago
- Unpaired Image Captioning ☆35 · Updated 3 years ago
- CVPR 2021 official PyTorch code for UC2: Universal Cross-lingual Cross-modal Vision-and-Language Pre-training ☆34 · Updated 3 years ago