Taaccoo / awesome-vqa-latest
Visual Question Answering Paper List.
☆53 · Updated 2 years ago
Alternatives and similar repositories for awesome-vqa-latest
Users interested in awesome-vqa-latest are comparing it to the repositories listed below.
- [CVPR 2021] Counterfactual VQA: A Cause-Effect Look at Language Bias ☆123 · Updated 3 years ago
- A reading list of papers about Visual Question Answering. ☆33 · Updated 2 years ago
- MuKEA: Multimodal Knowledge Extraction and Accumulation for Knowledge-based Visual Question Answering ☆96 · Updated 2 years ago
- PyTorch implementation of "Debiased Visual Question Answering from Feature and Sample Perspectives" (NeurIPS 2021) ☆25 · Updated 2 years ago
- An Empirical Study of GPT-3 for Few-Shot Knowledge-Based VQA, AAAI 2022 (Oral) ☆85 · Updated 3 years ago
- Counterfactual Samples Synthesizing for Robust VQA ☆78 · Updated 2 years ago
- ☆38 · Updated 2 years ago
- CVPR 2022 (Oral) Pytorch Code for Unsupervised Vision-and-Language Pre-training via Retrieval-based Multi-Granular Alignment ☆22 · Updated 3 years ago
- A collection of papers about VQA-CP datasets and their results ☆38 · Updated 3 years ago
- VisualCOMET: Reasoning about the Dynamic Context of a Still Image ☆86 · Updated 2 years ago
- A curated list of Multimodal Captioning related research (including image captioning, video captioning, and text captioning) ☆109 · Updated 3 years ago
- Controllable image captioning model with unsupervised modes ☆21 · Updated 2 years ago
- ☆30 · Updated 2 years ago
- A simplified pytorch version of densecap ☆41 · Updated 7 months ago
- Implementation for MAF: Multimodal Alignment Framework ☆46 · Updated 4 years ago
- ☆104 · Updated 3 years ago
- Recent Advances in Visual Dialog ☆30 · Updated 2 years ago
- Human-like Controllable Image Captioning with Verb-specific Semantic Roles. ☆36 · Updated 3 years ago
- MERLOT: Multimodal Neural Script Knowledge Models ☆224 · Updated 3 years ago
- Coarse-to-Fine Reasoning for Visual Question Answering (CVPRW'22) ☆45 · Updated 2 years ago
- CVPR 2021 Official Pytorch Code for UC2: Universal Cross-lingual Cross-modal Vision-and-Language Pre-training ☆34 · Updated 3 years ago
- Pytorch code for Language Models with Image Descriptors are Strong Few-Shot Video-Language Learners ☆115 · Updated 2 years ago
- [CVPR 2020] Transform and Tell: Entity-Aware News Image Captioning ☆91 · Updated last year
- GraphVQA: Language-Guided Graph Neural Networks for Scene Graph Question Answering ☆65 · Updated 3 years ago
- Grid features pre-training code for visual question answering ☆269 · Updated 3 years ago
- Code for WACV 2023 paper "VLC-BERT: Visual Question Answering with Contextualized Commonsense Knowledge" ☆21 · Updated 2 years ago
- Dataset and starting code for visual entailment dataset ☆111 · Updated 3 years ago
- Code for our ACL 2021 paper: "Check It Again: Progressive Visual Question Answering via Visual Entailment" ☆31 · Updated 3 years ago
- Official code for paper "Spatially Aware Multimodal Transformers for TextVQA" published at ECCV 2020. ☆64 · Updated 3 years ago
- A curated list of research papers in Video Captioning ☆120 · Updated 4 years ago