google-research-datasets / conceptual-captions
Conceptual Captions is a dataset containing (image-URL, caption) pairs designed for the training and evaluation of machine-learned image captioning systems.
☆524 · Updated 3 years ago
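The dataset ships as TSV files of tab-separated (caption, image-URL) rows rather than the images themselves, so a typical first step is streaming the pairs and fetching each image yourself. Below is a minimal Python sketch under that assumption; the filename `Train_GCC-training.tsv` and the caption-then-URL column order match the released training split, but verify them against your copy, and expect many URLs to have rotted.

```python
import csv
import requests

def iter_pairs(tsv_path):
    # Stream (caption, image URL) rows from a Conceptual Captions TSV.
    # QUOTE_NONE: captions may contain raw quote characters.
    with open(tsv_path, newline="", encoding="utf-8") as f:
        for row in csv.reader(f, delimiter="\t", quoting=csv.QUOTE_NONE):
            if len(row) >= 2:
                yield row[0], row[1]

def fetch_image(url, timeout=10):
    # Image URLs point at third-party hosts, so many links have rotted;
    # treat every request as fallible and return None on failure.
    try:
        resp = requests.get(url, timeout=timeout)
        resp.raise_for_status()
        return resp.content
    except requests.RequestException:
        return None

if __name__ == "__main__":
    # Smoke test: fetch the first reachable image and report its size.
    for caption, url in iter_pairs("Train_GCC-training.tsv"):
        image_bytes = fetch_image(url)
        if image_bytes is not None:
            print(f"{len(image_bytes)} bytes: {caption[:60]}")
            break
```

In practice, large-scale downloads are usually parallelized and failures logged, since a nontrivial fraction of the URLs no longer resolve.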
Alternatives and similar repositories for conceptual-captions:
Users interested in conceptual-captions are comparing it to the repositories listed below.
- Vision-Language Pre-training for Image Captioning and Question Answering ☆417 · Updated 3 years ago
- Code for the paper "VisualBERT: A Simple and Performant Baseline for Vision and Language" ☆531 · Updated last year
- ☆473 · Updated 2 years ago
- Multi Task Vision and Language ☆804 · Updated 2 years ago
- [CVPR 2021] VirTex: Learning Visual Representations from Textual Annotations ☆558 · Updated last year
- Oscar and VinVL ☆1,039 · Updated last year
- Grid features pre-training code for visual question answering ☆268 · Updated 3 years ago
- This repository focuses on Image Captioning & Video Captioning & Seq-to-Seq Learning & NLP ☆413 · Updated 2 years ago
- PyTorch code for EMNLP 2019 paper "LXMERT: Learning Cross-Modality Encoder Representations from Transformers" ☆942 · Updated 2 years ago
- Research code for ECCV 2020 paper "UNITER: UNiversal Image-TExt Representation Learning" ☆787 · Updated 3 years ago
- Conceptual 12M is a dataset containing (image-URL, caption) pairs collected for vision-and-language pre-training ☆375 · Updated last year
- Recognition to Cognition Networks (code for the model in "From Recognition to Cognition: Visual Commonsense Reasoning", CVPR 2019) ☆466 · Updated 3 years ago
- PyTorch bottom-up attention with Detectron2 ☆231 · Updated 3 years ago
- Project page for VinVL ☆351 · Updated last year
- Transformer-based image captioning extension for pytorch/fairseq ☆315 · Updated 4 years ago
- [CVPR 2021 Best Student Paper Honorable Mention, Oral] Official PyTorch code for ClipBERT, an efficient framework for end-to-end learning… ☆713 · Updated last year
- Python 3 support for the MS COCO caption evaluation tools ☆309 · Updated 5 months ago
- [ICLR 2022] code for "How Much Can CLIP Benefit Vision-and-Language Tasks?" https://arxiv.org/abs/2107.06383 ☆409 · Updated 2 years ago
- Meshed-Memory Transformer for Image Captioning, CVPR 2020 ☆522 · Updated 2 years ago
- Code for ICLR 2020 paper "VL-BERT: Pre-training of Generic Visual-Linguistic Representations" ☆741 · Updated last year
- A lightweight, scalable, and general framework for visual question answering research ☆321 · Updated 3 years ago
- PyTorch code for our CVPR 2018 paper "Neural Baby Talk" ☆524 · Updated 5 years ago
- PyTorch code for the paper "VSE++: Improving Visual-Semantic Embeddings with Hard Negatives" ☆496 · Updated 3 years ago
- Unofficial PyTorch implementation of Self-critical Sequence Training for Image Captioning, and others ☆998 · Updated last year
- Faster R-CNN model in PyTorch, pretrained on Visual Genome with ResNet-101 ☆233 · Updated 2 years ago
- PyTorch code for "Unifying Vision-and-Language Tasks via Text Generation" (ICML 2021) ☆366 · Updated last year
- Implementation of the Object Relation Transformer for Image Captioning ☆177 · Updated 4 months ago
- A PyTorch reimplementation of bottom-up-attention models ☆296 · Updated 2 years ago
- Automatic image captioning model based on Caffe, using features from bottom-up attention ☆246 · Updated last year
- Deep Modular Co-Attention Networks for Visual Question Answering ☆448 · Updated 4 years ago