AishwaryaAgrawal / GVQA
Code for the Grounded Visual Question Answering (GVQA) model from the paper "Don't Just Assume; Look and Answer: Overcoming Priors for Visual Question Answering"
☆23 · Updated 3 years ago
Alternatives and similar repositories for GVQA
Users interested in GVQA are comparing it to the repositories listed below.
- Code for CVPR'19 "Recursive Visual Attention in Visual Dialog" ☆64 · Updated 2 years ago
- PyTorch implementation of "Explainable and Explicit Visual Reasoning over Scene Graphs" ☆93 · Updated 6 years ago
- Implementation of "Diverse and Accurate Image Description Using a Variational Auto-Encoder with an Additive Gaussian Encoding Space" ☆58 · Updated 7 years ago
- NeurIPS 2019 paper "RUBi: Reducing Unimodal Biases for Visual Question Answering" ☆62 · Updated 4 years ago
- "Visual Question Generation as Dual Task of Visual Question Answering" (PyTorch version) ☆81 · Updated 7 years ago
- ☆30 · Updated 6 years ago
- PyTorch code for "Learning to Caption Images through a Lifetime by Asking Questions" (ICCV 2019) ☆16 · Updated 5 years ago
- "Connective Cognition Network for Directional Visual Commonsense Reasoning" ☆15 · Updated 4 years ago
- "Scene Graph Parsing as Dependency Parsing" ☆41 · Updated 6 years ago
- Code for the model "Heterogeneous Graph Learning for Visual Commonsense Reasoning" (NeurIPS 2019) ☆47 · Updated 4 years ago
- Code for "Discriminability Objective for Training Descriptive Captions" (CVPR 2018) ☆109 · Updated 5 years ago
- Code for "Bootstrap, Review, Decode: Using Out-of-Domain Textual Data to Improve Image Captioning" ☆20 · Updated 8 years ago
- "Information Maximizing Visual Question Generation" ☆66 · Updated last year
- PyTorch implementation of https://arxiv.org/pdf/1909.10470.pdf ☆32 · Updated 3 years ago
- Torch implementation of "Speaker-Listener-Reinforcer for Referring Expression Generation and Comprehension" ☆34 · Updated 7 years ago
- Code release for Hu et al., "Explainable Neural Computation via Stack Neural Module Networks", ECCV 2018 ☆71 · Updated 5 years ago
- Source code for the paper "Speaking the Same Language: Matching Machine to Human Captions by Adversarial Training" ☆66 · Updated 6 years ago
- ✨ Official PyTorch implementation of the EMNLP'19 paper "Dual Attention Networks for Visual Reference Resolution in Visual Dialog" ☆45 · Updated 2 years ago
- Code for our paper "Learning Conditioned Graph Structures for Interpretable Visual Question Answering" ☆149 · Updated 6 years ago
- [COLING 2018] "Learning Visually-Grounded Semantics from Contrastive Adversarial Samples" ☆57 · Updated 5 years ago
- "Improved Fusion of Visual and Language Representations by Dense Symmetric Co-Attention for Visual Question Answering" ☆106 · Updated 5 years ago
- Implementation for our paper "Conditional Image-Text Embedding Networks" ☆39 · Updated 5 years ago
- Shows visual grounding methods can be right for the wrong reasons! (ACL 2020) ☆23 · Updated 5 years ago
- Code for the NeurIPS 2019 paper "Self-Critical Reasoning for Robust Visual Question Answering" ☆41 · Updated 5 years ago
- Adds the SPICE metric to the coco-caption evaluation server code ☆50 · Updated 2 years ago
- Code release for Park et al., "Multimodal Explanations: Justifying Decisions and Pointing to the Evidence", CVPR 2018 ☆48 · Updated 6 years ago
- Pre-trained V+L data preparation ☆46 · Updated 5 years ago
- ☆54 · Updated 5 years ago
- PyTorch code for "Learning to Generate Grounded Visual Captions without Localization Supervision" ☆44 · Updated 4 years ago
- ☆63 · Updated 3 years ago