GT-Vision-Lab / VQA
☆380 · Updated 4 years ago
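This repository hosts the Python annotation API and evaluation code for the VQA dataset. Below is a minimal sketch of typical usage, assuming the VQA v2 annotation/question JSON files have been downloaded and the repository's PythonHelperTools directory is on the Python path; the file names are placeholders.

```python
# Minimal sketch of loading VQA annotations with the repository's Python API.
# Assumes the v2 annotation/question files are downloaded and PythonHelperTools
# is on the Python path; file names below are placeholders.
from vqaTools.vqa import VQA

ann_file = 'v2_mscoco_val2014_annotations.json'
ques_file = 'v2_OpenEnded_mscoco_val2014_questions.json'

vqa = VQA(ann_file, ques_file)                     # index annotations and questions
ques_ids = vqa.getQuesIds(quesTypes='what color')  # question ids for one question type
anns = vqa.loadQA(ques_ids)                        # load QA annotations for those ids
vqa.showQA(anns[:3])                               # print a few question/answer pairs
```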
Alternatives and similar repositories for VQA
Users interested in VQA are comparing it to the libraries listed below.
- Visual Q&A reading list ☆437 · Updated 6 years ago
- ☆350 · Updated 6 years ago
- An efficient PyTorch implementation of the winning entry of the 2017 VQA Challenge. ☆758 · Updated last year
- Strong baseline for visual question answering ☆239 · Updated 2 years ago
- Automatic image captioning model based on Caffe, using features from bottom-up attention. ☆245 · Updated 2 years ago
- Visual Question Answering in PyTorch ☆729 · Updated 5 years ago
- A lightweight, scalable, and general framework for visual question answering research ☆323 · Updated 3 years ago
- A PyTorch reimplementation of "Bilinear Attention Network", "Intra- and Inter-modality Attention", "Learning Conditioned Graph Structures… ☆295 · Updated 9 months ago
- Grid features pre-training code for visual question answering ☆269 · Updated 3 years ago
- Python 3 support for the MS COCO caption evaluation tools (see the usage sketch after this list) ☆318 · Updated 9 months ago
- ☆219 · Updated 8 years ago
- A Python wrapper for the Visual Genome API ☆363 · Updated last year
- [ICLR 2018] Learning to Count Objects in Natural Images for Visual Question Answering ☆205 · Updated 6 years ago
- Toolkit for the Visual7W visual question answering dataset ☆75 · Updated 5 years ago
- Referring Expression Datasets API ☆514 · Updated 8 months ago
- Bottom-up attention model for image captioning and VQA, based on Faster R-CNN and Visual Genome ☆1,447 · Updated 2 years ago
- Semantic Propositional Image Caption Evaluation ☆140 · Updated 2 years ago
- Deep Modular Co-Attention Networks for Visual Question Answering ☆454 · Updated 4 years ago
- PyTorch code for the paper "VSE++: Improving Visual-Semantic Embeddings with Hard Negatives" ☆507 · Updated 3 years ago
- Python code for CIDEr - Consensus-based Image Caption Evaluation ☆94 · Updated 8 years ago
- ☆1,184 · Updated last year
- Code for "Stacked Attention Networks for Image Question Answering" ☆108 · Updated 8 years ago
- PyTorch implementation of the winning entry from the VQA Challenge Workshop at CVPR'17 ☆163 · Updated 6 years ago
- Vision-Language Pre-training for Image Captioning and Question Answering ☆418 · Updated 3 years ago
- Faster R-CNN model in PyTorch, pretrained on Visual Genome with ResNet-101 ☆237 · Updated 2 years ago
- Repository for our CVPR 2017 and IJCV paper: TGIF-QA ☆175 · Updated 3 years ago
- Train a deeper LSTM and normalized CNN Visual Question Answering model. The current code can get 58.16 on OpenEnded and 63.09 on Multipl… ☆383 · Updated 6 years ago
- Bilinear attention networks for visual question answering ☆545 · Updated last year
- PyTorch code for our CVPR 2018 paper "Neural Baby Talk" ☆525 · Updated 6 years ago
- Code for the paper "VisualBERT: A Simple and Performant Baseline for Vision and Language" ☆536 · Updated 2 years ago
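For the Python 3 MS COCO caption evaluation tools listed above, here is a minimal scoring sketch, assuming pycocotools and the evaluation package are installed; the annotation and result file paths are placeholders.

```python
# Minimal sketch of scoring generated captions with the COCO caption evaluation
# tools. Assumes pycocotools and pycocoevalcap are installed; file paths are
# placeholders, and results must already be in the COCO result JSON format.
from pycocotools.coco import COCO
from pycocoevalcap.eval import COCOEvalCap

coco = COCO('captions_val2014.json')                  # ground-truth captions
coco_res = coco.loadRes('captions_results.json')      # generated captions
coco_eval = COCOEvalCap(coco, coco_res)
coco_eval.params['image_id'] = coco_res.getImgIds()   # score only predicted images
coco_eval.evaluate()                                  # BLEU, METEOR, ROUGE-L, CIDEr, ...

for metric, score in coco_eval.eval.items():
    print(f'{metric}: {score:.3f}')
```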