google-research-datasets / wit
WIT (Wikipedia-based Image Text) Dataset is a large multimodal multilingual dataset comprising 37M+ image-text sets with 11M+ unique images across 100+ languages.
☆1,071 · Updated 11 months ago
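WIT is distributed as gzipped TSV shards of per-image metadata and captions rather than raw images. Below is a minimal sketch of inspecting one shard with pandas; the shard filename is illustrative (see the repo's README for the official download links), and the column names used here (`language`, `image_url`, `caption_reference_description`) should be verified against the dataset's field description before relying on them.

```python
import pandas as pd

# Hypothetical local shard filename; check the repo's README for the
# official gzipped TSV download links.
shard = "wit_v1.train.all-00000-of-00010.tsv.gz"

# Each row is one image-text example; pandas decompresses .gz transparently.
# nrows keeps the peek cheap on multi-GB shards.
df = pd.read_csv(shard, sep="\t", compression="gzip", nrows=1000)

# Column names are taken from the dataset description and may differ;
# verify against df.columns first.
sample = df[["language", "image_url", "caption_reference_description"]].dropna()
print(sample.head())
```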
Alternatives and similar repositories for wit
Users interested in wit are comparing it to the libraries listed below.
- OpenAI CLIP text encoders for multiple languages! ☆810 · Updated 2 years ago
- Code release for SLIP: Self-supervision meets Language-Image Pre-training ☆778 · Updated 2 years ago
- Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention net out of DeepMind, in PyTorch ☆1,259 · Updated 2 years ago
- Conceptual Captions is a dataset containing (image-URL, caption) pairs designed for the training and evaluation of machine learned image … ☆543 · Updated 4 years ago
- GIT: A Generative Image-to-text Transformer for Vision and Language ☆572 · Updated last year
- A concise but complete implementation of CLIP with various experimental improvements from recent papers ☆713 · Updated last year
- A PyTorch Lightning solution to training OpenAI's CLIP from scratch. ☆709 · Updated 3 years ago
- Conceptual 12M is a dataset containing (image-URL, caption) pairs collected for vision-and-language pre-training. ☆397 · Updated last month
- Multi-Task Vision and Language ☆817 · Updated 3 years ago
- CLIP-like model evaluation ☆759 · Updated 2 weeks ago
- DataComp: In search of the next generation of multimodal datasets ☆736 · Updated 4 months ago
- Oscar and VinVL ☆1,051 · Updated 2 years ago
- ☆1,022 · Updated 2 years ago
- Robust fine-tuning of zero-shot models ☆730 · Updated 3 years ago
- 🧀 Code and models for the ICML 2023 paper "Grounding Language Models to Images for Multimodal Inputs and Outputs". ☆482 · Updated last year
- Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in PyTorch ☆1,173 · Updated last year
- [CVPR 2021 Best Student Paper Honorable Mention, Oral] Official PyTorch code for ClipBERT, an efficient framework for end-to-end learning… ☆721 · Updated 2 years ago
- PyTorch code for "Unifying Vision-and-Language Tasks via Text Generation" (ICML 2021) ☆373 · Updated 2 years ago
- TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale. ☆1,648 · Updated last week
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆666 · Updated 2 years ago
- [ICCV 2021 Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decode… ☆867 · Updated 2 years ago
- Research code for pixel-based encoders of language (PIXEL) ☆339 · Updated last month
- ☆517 · Updated last year
- Automatically create Faiss k-NN indices with well-tuned similarity search parameters (see the sketch after this list). ☆871 · Updated last year
- Pix2Seq codebase: multi-task learning with generative modeling (autoregressive and diffusion) ☆922 · Updated last year
- Implementation of RETRO, DeepMind's retrieval-based attention net, in PyTorch ☆872 · Updated last year
- COYO-700M: Large-scale Image-Text Pair Dataset ☆1,234 · Updated 2 years ago
- Run effective large-batch contrastive learning beyond GPU/TPU memory constraints ☆405 · Updated last year
- MultimodalC4 is a multimodal extension of c4 that interleaves millions of images with text. ☆936 · Updated 5 months ago
- Recent Advances in Vision and Language Pre-Trained Models (VL-PTMs) ☆1,155 · Updated 3 years ago
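The autofaiss entry above automates Faiss index selection and parameter tuning for embeddings such as those produced by the CLIP-style models in this list. A minimal sketch, assuming the project's `build_index` entry point and memory-budget arguments; the file paths, budgets, and toy data here are illustrative:

```python
import numpy as np
from autofaiss import build_index  # pip install autofaiss

# Toy embeddings standing in for e.g. CLIP image features.
embeddings = np.random.rand(10_000, 512).astype("float32")

# build_index chooses an index type and tunes its search parameters
# to fit within the given memory budgets, then writes the artifacts
# to the (illustrative) paths below.
index, index_infos = build_index(
    embeddings,
    index_path="knn.index",
    index_infos_path="index_infos.json",
    max_index_memory_usage="500MB",
    current_memory_available="2GB",
)

# Query the tuned index: 5 nearest neighbours of the first vector.
distances, ids = index.search(embeddings[:1], 5)
print(ids, distances)
```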