google-research-datasets / wit
WIT (Wikipedia-based Image Text) Dataset is a large multimodal, multilingual dataset comprising 37M+ image-text sets with 11M+ unique images across 100+ languages.
☆1,067 · Updated 10 months ago
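For orientation, the released data is a set of gzipped TSV shards, so sampling a few rows needs nothing beyond the Python standard library. The sketch below is a minimal example, not part of the repository: the shard URL and the column names (`language`, `image_url`, `caption_reference_description`) are assumptions taken from the repo's data documentation, so verify them against the README before relying on them.

```python
import csv
import gzip
import urllib.request

# Assumed shard URL: WIT v1 ships as gzipped TSV shards; the exact file
# names and host are documented in the repository's download section.
WIT_SHARD_URL = (
    "https://storage.googleapis.com/gresearch/wit/"
    "wit_v1.train.all-00000-of-00010.tsv.gz"
)

# Some WIT text fields (e.g. page context) are long; raise the csv limit.
csv.field_size_limit(1 << 20)


def iter_wit_examples(url, limit=5):
    """Stream a few (language, image_url, caption) rows from one shard."""
    with urllib.request.urlopen(url) as resp:
        # gzip.open accepts an existing file object, so the shard is
        # decompressed on the fly instead of being downloaded whole.
        with gzip.open(resp, mode="rt", encoding="utf-8", newline="") as text:
            reader = csv.DictReader(text, delimiter="\t")
            for i, row in enumerate(reader):
                if i >= limit:
                    break
                # Column names follow the WIT data description; caption
                # fields may be empty for a given row.
                yield (
                    row.get("language"),
                    row.get("image_url"),
                    row.get("caption_reference_description"),
                )


if __name__ == "__main__":
    for lang, image_url, caption in iter_wit_examples(WIT_SHARD_URL):
        print(lang, image_url, caption)
```

Streaming through `urllib` this way avoids pulling down a full multi-gigabyte shard just to inspect the schema.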
Alternatives and similar repositories for wit
Users interested in wit are comparing it to the libraries listed below.
- OpenAI CLIP text encoders for multiple languages! ☆809 · Updated 2 years ago
- Conceptual Captions is a dataset containing (image-URL, caption) pairs designed for the training and evaluation of machine learned image… ☆542 · Updated 3 years ago
- Code release for SLIP: Self-supervision meets Language-Image Pre-training ☆776 · Updated 2 years ago
- Oscar and VinVL ☆1,051 · Updated last year
- Conceptual 12M is a dataset containing (image-URL, caption) pairs collected for vision-and-language pre-training. ☆394 · Updated 3 weeks ago
- A concise but complete implementation of CLIP with various experimental improvements from recent papers ☆711 · Updated last year
- Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention net out of DeepMind, in PyTorch ☆1,256 · Updated 2 years ago
- Multi Task Vision and Language ☆816 · Updated 3 years ago
- A PyTorch Lightning solution to training OpenAI's CLIP from scratch. ☆706 · Updated 3 years ago
- Research code for ECCV 2020 paper "UNITER: UNiversal Image-TExt Representation Learning" ☆795 · Updated 4 years ago
- GIT: A Generative Image-to-text Transformer for Vision and Language ☆572 · Updated last year
- Research code for pixel-based encoders of language (PIXEL) ☆338 · Updated 3 weeks ago
- [CVPR 2021 Best Student Paper Honorable Mention, Oral] Official PyTorch code for ClipBERT, an efficient framework for end-to-end learning… ☆721 · Updated 2 years ago
- 🧀 Code and models for the ICML 2023 paper "Grounding Language Models to Images for Multimodal Inputs and Outputs". ☆482 · Updated last year
- Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in PyTorch ☆1,164 · Updated last year
- Vision-Language Pre-training for Image Captioning and Question Answering ☆421 · Updated 3 years ago
- Robust fine-tuning of zero-shot models ☆727 · Updated 3 years ago
- CLIP-like model evaluation ☆748 · Updated 2 weeks ago
- PyTorch code for "Unifying Vision-and-Language Tasks via Text Generation" (ICML 2021) ☆372 · Updated 2 years ago
- Implementation of RETRO, DeepMind's retrieval-based attention net, in PyTorch ☆870 · Updated last year
- DataComp: In search of the next generation of multimodal datasets ☆731 · Updated 3 months ago
- Code for the paper "VisualBERT: A Simple and Performant Baseline for Vision and Language" ☆536 · Updated 2 years ago
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆664 · Updated 2 years ago
- TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale. ☆1,638 · Updated last week
- Project page for VinVL ☆357 · Updated 2 years ago
- [ICLR 2022] Code for "How Much Can CLIP Benefit Vision-and-Language Tasks?" https://arxiv.org/abs/2107.06383 ☆416 · Updated 2 years ago
- Simple image captioning model ☆1,388 · Updated last year
- Recent Advances in Vision and Language PreTrained Models (VL-PTMs) ☆1,154 · Updated 2 years ago
- COYO-700M: Large-scale Image-Text Pair Dataset ☆1,233 · Updated 2 years ago