GT-Vision-Lab / VQA
☆384 · Updated 4 years ago
Alternatives and similar repositories for VQA
Users interested in VQA are comparing it to the repositories listed below.
- Python 3 support for the MS COCO caption evaluation tools ☆326 · Updated last year
- An efficient PyTorch implementation of the winning entry of the 2017 VQA Challenge. ☆758 · Updated last year
- Strong baseline for visual question answering ☆241 · Updated 2 years ago
- Automatic image captioning model based on Caffe, using features from bottom-up attention. ☆248 · Updated 2 years ago
- Semantic Propositional Image Caption Evaluation ☆143 · Updated 2 years ago
- Referring Expression Datasets API ☆537 · Updated last year
- Visual Q&A reading list ☆438 · Updated 6 years ago
- Python code for CIDEr: Consensus-based Image Caption Evaluation ☆97 · Updated 8 years ago
- PyTorch code for the paper "VSE++: Improving Visual-Semantic Embeddings with Hard Negatives" ☆519 · Updated 3 years ago
- ☆350 · Updated 6 years ago
- Grid features pre-training code for visual question answering ☆269 · Updated 4 years ago
- Visual Question Answering in PyTorch ☆732 · Updated 5 years ago
- A lightweight, scalable, and general framework for visual question answering research ☆327 · Updated 4 years ago
- A PyTorch reimplementation of "Bilinear Attention Network", "Intra- and Inter-modality Attention", "Learning Conditioned Graph Structures… ☆295 · Updated last year
- A Python wrapper for the Visual Genome API ☆364 · Updated last year
- Deep Modular Co-Attention Networks for Visual Question Answering ☆455 · Updated 4 years ago
- [ICLR 2018] Learning to Count Objects in Natural Images for Visual Question Answering ☆207 · Updated 6 years ago
- ☆1,196 · Updated last year
- Conceptual Captions is a dataset containing (image-URL, caption) pairs designed for the training and evaluation of machine learned image … ☆550 · Updated 4 years ago
- Bilinear attention networks for visual question answering ☆545 · Updated last year
- Toolkit for the Visual7W visual question answering dataset ☆78 · Updated 5 years ago
- Unofficial PyTorch implementation of Self-critical Sequence Training for Image Captioning, and others. ☆1,006 · Updated last year
- Bottom-up attention model for image captioning and VQA, based on Faster R-CNN and Visual Genome ☆1,454 · Updated 2 years ago
- Vision-Language Pre-training for Image Captioning and Question Answering ☆424 · Updated 3 years ago
- Repository for our CVPR 2017 and IJCV paper: TGIF-QA ☆176 · Updated 4 years ago
- A PyTorch reimplementation of bottom-up-attention models ☆303 · Updated 3 years ago
- Faster R-CNN model in PyTorch, pretrained on Visual Genome with ResNet-101 ☆239 · Updated 2 years ago
- Code for the ICCV 2019 paper "Attention on Attention for Image Captioning" ☆336 · Updated 4 years ago
- PyTorch implementation of Image Captioning with Bottom-up, Top-down Attention ☆165 · Updated 6 years ago
- Image caption metrics: BLEU, CIDEr, METEOR, ROUGE, SPICE ☆111 · Updated 6 years ago