google-research-datasets / wit
WIT (Wikipedia-based Image Text) Dataset is a large multimodal multilingual dataset comprising 37M+ image-text sets with 11M+ unique images across 100+ languages.
⭐ 1,063 · Updated 9 months ago
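WIT is distributed as gzipped TSV shards of per-language (image URL, text) rows. As a quick orientation, here is a minimal sketch of loading one shard with pandas; the shard filename and the `language`, `image_url`, and `caption_reference_description` column names are assumptions based on the TSV release, so check the repo's data documentation for the authoritative schema and download links.

```python
import pandas as pd

# Hypothetical local path to one downloaded WIT training shard.
shard = "wit_v1.train.all-00000-of-00010.tsv.gz"

# Each shard is a gzipped, tab-separated table of image-text rows.
df = pd.read_csv(shard, sep="\t", compression="gzip")

# Example filter: English rows that carry a reference caption
# (column names assumed; verify against the published schema).
en = df[(df["language"] == "en") & df["caption_reference_description"].notna()]
print(en[["image_url", "caption_reference_description"]].head())
```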
Alternatives and similar repositories for wit
Users interested in wit are comparing it to the libraries listed below:
- Code release for SLIP: Self-supervision meets Language-Image Pre-training ⭐ 769 · Updated 2 years ago
- Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention net out of DeepMind, in PyTorch ⭐ 1,248 · Updated 2 years ago
- OpenAI CLIP text encoders for multiple languages! ⭐ 802 · Updated 2 years ago
- Conceptual 12M is a dataset containing (image-URL, caption) pairs collected for vision-and-language pre-training. ⭐ 394 · Updated 2 years ago
- CLIP-like model evaluation ⭐ 726 · Updated last week
- Oscar and VinVL ⭐ 1,049 · Updated last year
- A concise but complete implementation of CLIP with various experimental improvements from recent papers ⭐ 713 · Updated last year
- Conceptual Captions is a dataset containing (image-URL, caption) pairs designed for the training and evaluation of machine learned image… ⭐ 539 · Updated 3 years ago
- Multi Task Vision and Language ⭐ 813 · Updated 3 years ago
- [CVPR 2021 Best Student Paper Honorable Mention, Oral] Official PyTorch code for ClipBERT, an efficient framework for end-to-end learning… ⭐ 722 · Updated last year
- Vision-Language Pre-training for Image Captioning and Question Answering ⭐ 419 · Updated 3 years ago
- 🧀 Code and models for the ICML 2023 paper "Grounding Language Models to Images for Multimodal Inputs and Outputs". ⭐ 482 · Updated last year
- A PyTorch Lightning solution to training OpenAI's CLIP from scratch (see the contrastive-loss sketch after this list). ⭐ 698 · Updated 3 years ago
- Research code for ECCV 2020 paper "UNITER: UNiversal Image-TExt Representation Learning" ⭐ 793 · Updated 3 years ago
- Robust fine-tuning of zero-shot models ⭐ 717 · Updated 3 years ago
- Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in PyTorch ⭐ 1,151 · Updated last year
- [CVPR 2022] Official code for "Unified Contrastive Learning in Image-Text-Label Space" ⭐ 400 · Updated last year
- GIT: A Generative Image-to-text Transformer for Vision and Language ⭐ 568 · Updated last year
- ICLR 2024 Spotlight: curation/training code, metadata, distribution and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Expert… ⭐ 1,463 · Updated 3 months ago
- PyTorch code for "Unifying Vision-and-Language Tasks via Text Generation" (ICML 2021) ⭐ 371 · Updated last year
- Language Models Can See: Plugging Visual Controls in Text Generation ⭐ 256 · Updated 3 years ago
- DataComp: In search of the next generation of multimodal datasets ⭐ 719 · Updated last month
- Recent Advances in Vision and Language Pre-training (VLP) ⭐ 293 · Updated 2 years ago
- MERLOT: Multimodal Neural Script Knowledge Models ⭐ 224 · Updated 3 years ago
- Project page for VinVL ⭐ 355 · Updated last year
- Research code for pixel-based encoders of language (PIXEL) ⭐ 335 · Updated last year
- ⭐ 1,011 · Updated 2 years ago
- [ICLR 2022] Code for "How Much Can CLIP Benefit Vision-and-Language Tasks?" https://arxiv.org/abs/2107.06383 ⭐ 412 · Updated 2 years ago
- Code for ALBEF: a new vision-language pre-training method ⭐ 1,669 · Updated 2 years ago
- Recent Advances in Vision and Language PreTrained Models (VL-PTMs) ⭐ 1,153 · Updated 2 years ago
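Several of the entries above implement or train CLIP-style models. As a point of reference for the CLIP-from-scratch item, this is a minimal sketch of the symmetric contrastive (InfoNCE) objective such trainers optimize; the function name, tensor shapes, and fixed temperature are illustrative assumptions, not any particular repository's API.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb: torch.Tensor,
                          text_emb: torch.Tensor,
                          temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    image_emb and text_emb are (batch, dim); row i of each tensor
    comes from the same (image, text) pair.
    """
    # Cosine similarity via L2-normalized embeddings.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature  # (batch, batch)

    # Matched pairs sit on the diagonal: treat each row (image-to-text)
    # and each column (text-to-image) as a classification over the batch.
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2
```

In practice the temperature is usually a learned, clamped parameter rather than a constant, but the symmetric two-direction cross-entropy above is the core of the objective.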