MILVLG / openvqa
A lightweight, scalable, and general framework for visual question answering research
☆321 · Updated 3 years ago
Alternatives and similar repositories for openvqa:
Users interested in openvqa are comparing it to the libraries listed below.
- Deep Modular Co-Attention Networks for Visual Question Answering ☆449 · Updated 4 years ago
- Grid features pre-training code for visual question answering ☆268 · Updated 3 years ago
- A PyTorch reimplementation of bottom-up-attention models ☆297 · Updated 2 years ago
- A PyTorch reimplementation of "Bilinear Attention Network", "Intra- and Inter-modality Attention", "Learning Conditioned Graph Structures… ☆294 · Updated 6 months ago
- Code for the paper "Attention on Attention for Image Captioning" (ICCV 2019) ☆333 · Updated 3 years ago
- Faster R-CNN model in PyTorch, pretrained on Visual Genome with ResNet-101 ☆234 · Updated 2 years ago
- PyTorch implementation of image captioning with bottom-up, top-down attention ☆166 · Updated 6 years ago
- Implementation of 'X-Linear Attention Networks for Image Captioning' [CVPR 2020] ☆273 · Updated 3 years ago
- This repository focuses on Image Captioning, Video Captioning, Seq-to-Seq Learning, and NLP ☆413 · Updated 2 years ago
- Strong baseline for visual question answering ☆239 · Updated last year
- ☆220 · Updated 2 years ago
- Automatic image captioning model based on Caffe, using features from bottom-up attention. ☆245 · Updated 2 years ago
- Implementation of the Object Relation Transformer for Image Captioning ☆177 · Updated 5 months ago
- PyTorch bottom-up attention with Detectron2 ☆231 · Updated 3 years ago
- Research code for the ICCV 2019 paper "Relation-aware Graph Attention Network for Visual Question Answering" ☆182 · Updated 3 years ago
- MUREL (CVPR 2019), a multimodal relational reasoning module for VQA ☆194 · Updated 5 years ago
- Vision-Language Pre-training for Image Captioning and Question Answering ☆417 · Updated 3 years ago
- BLOCK (AAAI 2019), with a multimodal fusion library for deep learning models ☆349 · Updated 5 years ago
- ☆474 · Updated 2 years ago
- Code accompanying the paper "Say As You Wish: Fine-grained Control of Image Caption Generation with Abstract Scene Graphs" (Chen et al., … ☆199 · Updated 2 years ago
- PyTorch code for the EMNLP 2019 paper "LXMERT: Learning Cross-Modality Encoder Representations from Transformers" ☆942 · Updated 2 years ago
- An implementation that downstreams pre-trained V+L models to VQA tasks. Now supports VisualBERT, LXMERT, and UNITER ☆163 · Updated 2 years ago
- An efficient PyTorch implementation of the winning entry of the 2017 VQA Challenge ☆756 · Updated 11 months ago
- Meshed-Memory Transformer for Image Captioning (CVPR 2020) ☆524 · Updated 2 years ago
- Code for Unsupervised Image Captioning ☆217 · Updated last year
- Bottom-up feature extractor implemented in PyTorch ☆71 · Updated 5 years ago
- Bottom-up attention model for image captioning and VQA, based on Faster R-CNN and Visual Genome ☆1,441 · Updated 2 years ago
- A curated list of Visual Question Answering (VQA, Image/Video Question Answering), Visual Question Generation, Visual Dialog, Visual Common… ☆659 · Updated last year
- Research code for the ECCV 2020 paper "UNITER: UNiversal Image-TExt Representation Learning" ☆787 · Updated 3 years ago
- Python 3 support for the MS COCO caption evaluation tools ☆311 · Updated 6 months ago