google-research-datasets / conceptual-captions
Conceptual Captions is a dataset containing (image-URL, caption) pairs designed for the training and evaluation of machine-learned image captioning systems.
☆556 · Updated 4 years ago
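Since the dataset ships as (image-URL, caption) pairs rather than images, a typical first step is parsing the released TSV files into those pairs. The sketch below assumes the common caption-tab-URL line layout of the Conceptual Captions TSV releases (verify the column order against the files you download); the sample line and URL are illustrative only.

```python
import csv
import io

def load_pairs(tsv_text):
    """Parse (caption, image-URL) pairs from a Conceptual Captions-style TSV.

    Assumes one pair per line, caption and URL separated by a tab
    (an assumption about the release format -- check your copy).
    """
    reader = csv.reader(io.StringIO(tsv_text), delimiter="\t")
    # Keep only rows that look like (caption, url) with a usable URL.
    return [(caption, url) for caption, url in reader
            if url.startswith("http")]

# Hypothetical sample line standing in for a real TSV file.
sample = "a dog runs on the beach\thttp://example.com/dog.jpg\n"
pairs = load_pairs(sample)
print(pairs)
```

From here the URLs are usually fed to a bulk image downloader (several of the repositories listed below exist for exactly that step), since many links rot over time and downloads must tolerate failures.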
Alternatives and similar repositories for conceptual-captions
Users interested in conceptual-captions are comparing it to the libraries listed below.
- Vision-Language Pre-training for Image Captioning and Question Answering — ☆424 · Updated 3 years ago
- Multi-Task Vision and Language — ☆822 · Updated 3 years ago
- Conceptual 12M is a dataset containing (image-URL, caption) pairs collected for vision-and-language pre-training. — ☆409 · Updated 4 months ago
- Code for the paper "VisualBERT: A Simple and Performant Baseline for Vision and Language" — ☆539 · Updated 2 years ago
- [CVPR 2021 Best Student Paper Honorable Mention, Oral] Official PyTorch code for ClipBERT, an efficient framework for end-to-end learning… — ☆724 · Updated 2 years ago
- Transformer-based image captioning extension for pytorch/fairseq — ☆317 · Updated 4 years ago
- [CVPR 2021] VirTex: Learning Visual Representations from Textual Annotations — ☆564 · Updated 3 months ago
- Oscar and VinVL — ☆1,051 · Updated 2 years ago
- ☆390 · Updated 4 years ago
- Grid features pre-training code for visual question answering — ☆269 · Updated 4 years ago
- Reliably download millions of images efficiently — ☆118 · Updated 4 years ago
- PyTorch code for "Unifying Vision-and-Language Tasks via Text Generation" (ICML 2021) — ☆374 · Updated 2 years ago
- Research code for the ECCV 2020 paper "UNITER: UNiversal Image-TExt Representation Learning" — ☆797 · Updated 4 years ago
- Project page for VinVL — ☆359 · Updated 2 years ago
- PyTorch bottom-up attention with Detectron2 — ☆238 · Updated 3 years ago