google-research-datasets / wit
WIT (Wikipedia-based Image Text) is a large multimodal, multilingual dataset comprising 37M+ image-text sets with 11M+ unique images across 100+ languages.
⭐ 1,049 · Updated 7 months ago
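WIT is distributed as compressed TSV shards. Below is a minimal sketch of exploring one downloaded shard with pandas; the shard file name is hypothetical, and the column names (e.g. `language`, `image_url`, `caption_reference_description`) follow the field list documented in this repo, so verify them against the header of the file you actually download.

```python
# Minimal sketch: explore one downloaded WIT TSV shard with pandas.
# The shard file name below is hypothetical; see the repo's data page
# for real download links. Column names follow the fields documented
# in the WIT repo -- verify them against df.columns for your shard.
import pandas as pd

df = pd.read_csv(
    "wit_v1.train.all-00000-of-00010.tsv.gz",  # hypothetical file name
    sep="\t",
    compression="gzip",
    nrows=10_000,  # shards are large; sample rows while exploring
)

print(df.columns.tolist())       # inspect the actual schema first
en = df[df["language"] == "en"]  # assumes a `language` column
print(en[["image_url", "caption_reference_description"]].head())
```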
Alternatives and similar repositories for wit:
Users interested in wit are comparing it to the libraries listed below.
- Implementation of 🦩 Flamingo, DeepMind's state-of-the-art few-shot visual question answering attention network, in PyTorch ⭐ 1,240 · Updated 2 years ago
- A concise but complete implementation of CLIP with various experimental improvements from recent papers ⭐ 709 · Updated last year
- Code release for SLIP: Self-supervision meets Language-Image Pre-training ⭐ 766 · Updated 2 years ago
- Conceptual 12M is a dataset containing (image-URL, caption) pairs collected for vision-and-language pre-training. ⭐ 389 · Updated 2 years ago
- Oscar and VinVL ⭐ 1,049 · Updated last year
- Conceptual Captions is a dataset containing (image-URL, caption) pairs designed for the training and evaluation of machine learned image … ⭐ 534 · Updated 3 years ago
- OpenAI CLIP text encoders for multiple languages! ⭐ 796 · Updated last year
- DataComp: In search of the next generation of multimodal datasets ⭐ 703 · Updated last week
- [ICCV 2021 Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decode… ⭐ 849 · Updated last year
- Implementation of RETRO, DeepMind's retrieval-based attention net, in PyTorch ⭐ 864 · Updated last year
- CLIP-like model evaluation ⭐ 703 · Updated last month
- A PyTorch Lightning solution for training OpenAI's CLIP from scratch. ⭐ 691 · Updated 3 years ago
- TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale. ⭐ 1,591 · Updated this week
- GIT: A Generative Image-to-text Transformer for Vision and Language ⭐ 567 · Updated last year
- Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in PyTorch ⭐ 1,132 · Updated last year
- Code release for "Learning Video Representations from Large Language Models" ⭐ 519 · Updated last year
- MultimodalC4 is a multimodal extension of C4 that interleaves millions of images with text. ⭐ 929 · Updated last month
- 🧀 Code and models for the ICML 2023 paper "Grounding Language Models to Images for Multimodal Inputs and Outputs". ⭐ 482 · Updated last year
- Research code for pixel-based encoders of language (PIXEL) ⭐ 335 · Updated last year
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ⭐ 653 · Updated 2 years ago
- Robust fine-tuning of zero-shot models ⭐ 698 · Updated 3 years ago
- Language Models Can See: Plugging Visual Controls in Text Generation ⭐ 257 · Updated 2 years ago
- Multi Task Vision and Language ⭐ 812 · Updated 3 years ago
- Automatically create Faiss k-NN indices with optimal similarity search parameters. ⭐ 853 · Updated 11 months ago
- Recent Advances in Vision and Language Pre-Trained Models (VL-PTMs) ⭐ 1,152 · Updated 2 years ago
- [CVPR 2021 Best Student Paper Honorable Mention, Oral] Official PyTorch code for ClipBERT, an efficient framework for end-to-end learning… ⭐ 719 · Updated last year
- [ICLR 2022] Code for "How Much Can CLIP Benefit Vision-and-Language Tasks?" https://arxiv.org/abs/2107.06383 ⭐ 411 · Updated 2 years ago
- Code for ALBEF: a new vision-language pre-training method ⭐ 1,648 · Updated 2 years ago
- Flexible components pairing 🤗 Transformers with PyTorch Lightning ⭐ 608 · Updated 2 years ago
- Code for the ICML 2021 (long talk) paper: "ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision" ⭐ 1,458 · Updated last year