sanyam5 / irlc-vqa-counting
Code for Interpretable Counting for Visual Question Answering, for the ICLR 2018 Reproducibility Challenge.
☆19 · Updated 6 years ago
Alternatives and similar repositories for irlc-vqa-counting:
Users interested in irlc-vqa-counting are comparing it to the repositories listed below.
- Hadamard Product for Low-rank Bilinear Pooling ☆70 · Updated 7 years ago
- Structured Attentions for Visual Question Answering ☆46 · Updated 7 years ago
- An unofficial PyTorch implementation of the HAN and AdaHAN models presented in the "Learning Visual Question Answering by Bootstrapping H… ☆54 · Updated 6 years ago
- Code release for Hu et al., Modeling Relationships in Referential Expressions with Compositional Modular Networks, in CVPR 2017 ☆67 · Updated 6 years ago
- Visual Question Generation as Dual Task of Visual Question Answering (PyTorch Version) ☆81 · Updated 6 years ago
- Visual Storytelling API ☆36 · Updated 8 years ago
- Localize objects in images using referring expressions ☆37 · Updated 8 years ago
- Adds the SPICE metric to the coco-caption evaluation server code ☆49 · Updated 2 years ago
- Code release for Hu et al., Explainable Neural Computation via Stack Neural Module Networks, in ECCV 2018 ☆71 · Updated 5 years ago
- Implementation of the Text-guided Attention Model for Image Captioning ☆21 · Updated 7 years ago
- Visual Question Answering project with state-of-the-art single-model performance. ☆131 · Updated 6 years ago
- Attention-based Visual Question Answering in Torch ☆100 · Updated 7 years ago
- Code for Stacked Attention Networks for Image Question Answering ☆108 · Updated 8 years ago
- Transfer Learning via Unsupervised Task Discovery for Visual Question Answering ☆31 · Updated 6 years ago
- Toolkit for the Visual7W visual question answering dataset ☆76 · Updated 5 years ago
- [ICLR 2018] Learning to Count Objects in Natural Images for Visual Question Answering ☆205 · Updated 6 years ago
- Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering ☆24 · Updated 4 years ago
- ☆30 · Updated 6 years ago
- Code for the Grounded Visual Question Answering (GVQA) model from the paper -- Don't Just Assume; Look and Answer: Overcoming Priors for … ☆22 · Updated 3 years ago
- [COLING 2018] Learning Visually-Grounded Semantics from Contrastive Adversarial Samples. ☆57 · Updated 5 years ago
- Image captioning with semantic attention ☆11 · Updated 8 years ago
- Implementation of a CVPR 2016 paper ☆75 · Updated 4 years ago
- Implements an MLP for VQA ☆7 · Updated 8 years ago
- A modular and simple approach to VQA in Keras ☆21 · Updated 7 years ago
- Implementation of Diverse and Accurate Image Description Using a Variational Auto-Encoder with an Additive Gaussian Encoding Space ☆58 · Updated 7 years ago
- Co-attending Regions and Detections for VQA. ☆40 · Updated 6 years ago
- Torch implementation of Stacked Attention Networks ☆23 · Updated 8 years ago
- Released code for the paper: Where To Look: Focus Regions for Visual Question Answering (CVPR 2016) ☆10 · Updated 5 years ago
- TensorFlow implementation of the paper: Optimization of Image Description Metrics Using Policy Gradient Methods ☆29 · Updated 6 years ago
- Code for Discriminability Objective for Training Descriptive Captions (CVPR 2018) ☆109 · Updated 5 years ago