LuoweiZhou / VLP
Vision-Language Pre-training for Image Captioning and Question Answering
☆417 · Updated 3 years ago
Alternatives and similar repositories for VLP:
Users interested in VLP are comparing it to the repositories listed below
- Transformer-based image captioning extension for pytorch/fairseq ☆315 · Updated 4 years ago
- Research code for ECCV 2020 paper "UNITER: UNiversal Image-TExt Representation Learning" ☆792 · Updated 3 years ago
- ☆476 · Updated 2 years ago
- An implementation that downstreams pre-trained V+L models to VQA tasks. Now supports: VisualBERT, LXMERT, and UNITER ☆163 · Updated 2 years ago
- PyTorch bottom-up attention with Detectron2 ☆233 · Updated 3 years ago
- PyTorch code for EMNLP 2019 paper "LXMERT: Learning Cross-Modality Encoder Representations from Transformers" ☆948 · Updated 2 years ago
- Grid features pre-training code for visual question answering ☆269 · Updated 3 years ago
- Code for the paper "VisualBERT: A Simple and Performant Baseline for Vision and Language" ☆535 · Updated last year
- Meshed-Memory Transformer for Image Captioning (CVPR 2020) ☆534 · Updated 2 years ago
- Implementation of the Object Relation Transformer for Image Captioning ☆178 · Updated 7 months ago
- Project page for VinVL ☆354 · Updated last year
- Faster R-CNN model in PyTorch, pretrained on Visual Genome with ResNet-101 ☆236 · Updated 2 years ago
- Oscar and VinVL ☆1,048 · Updated last year
- A PyTorch reimplementation of bottom-up-attention models ☆300 · Updated 3 years ago
- Python 3 support for the MS COCO caption evaluation tools ☆317 · Updated 8 months ago
- Implementation of "X-Linear Attention Networks for Image Captioning" (CVPR 2020) ☆273 · Updated 3 years ago
- Multi-Task Vision and Language ☆811 · Updated 3 years ago
- Code for the paper "Attention on Attention for Image Captioning" (ICCV 2019) ☆333 · Updated 3 years ago
- PyTorch implementation of image captioning with bottom-up, top-down attention ☆166 · Updated 6 years ago
- Automatic image captioning model based on Caffe, using features from bottom-up attention. ☆245 · Updated 2 years ago
- Conceptual Captions is a dataset containing (image-URL, caption) pairs designed for the training and evaluation of machine learned image captioning systems ☆534 · Updated 3 years ago
- Code for ICLR 2020 paper "VL-BERT: Pre-training of Generic Visual-Linguistic Representations". ☆740 · Updated last year
- A lightweight, scalable, and general framework for visual question answering research ☆322 · Updated 3 years ago
- This repository focuses on Image Captioning & Video Captioning & Seq-to-Seq Learning & NLP ☆413 · Updated 2 years ago
- Code for Unsupervised Image Captioning ☆217 · Updated 2 years ago
- Deep Modular Co-Attention Networks for Visual Question Answering ☆452 · Updated 4 years ago
- ☆220 · Updated 3 years ago
- Image Captioning Using Transformer ☆263 · Updated 2 years ago
- PyTorch code for "Unifying Vision-and-Language Tasks via Text Generation" (ICML 2021) ☆369 · Updated last year
- An efficient PyTorch implementation of the winning entry of the 2017 VQA Challenge. ☆757 · Updated last year