mlfoundations / datacomp
DataComp: In search of the next generation of multimodal datasets
☆768 · Updated 9 months ago
Alternatives and similar repositories for datacomp
Users interested in datacomp are comparing it to the repositories listed below.
- CLIP-like model evaluation · ☆800 · Updated 3 weeks ago
- Code and models for the ICML 2023 paper "Grounding Language Models to Images for Multimodal Inputs and Outputs" · ☆485 · Updated 2 years ago
- Code and models for the NeurIPS 2023 paper "Generating Images with Multimodal Language Models" · ☆471 · Updated 2 years ago
- [NeurIPS 2023] Official implementation of the paper "An Inverse Scaling Law for CLIP Training" · ☆320 · Updated last year
- An open source implementation of "Scaling Autoregressive Multi-Modal Models: Pretraining and Instruction Tuning", an all-new multimodal … · ☆364 · Updated 2 years ago
- ☆643 · Updated last year
- Open reproduction of MUSE for fast text2image generation · ☆359 · Updated last year
- Robust fine-tuning of zero-shot models · ☆759 · Updated 3 years ago
- The official repository for the LENS (Large Language Models Enhanced to See) system · ☆356 · Updated 6 months ago
- Implementation of 🦩 Flamingo, the state-of-the-art few-shot visual question answering attention net from DeepMind, in PyTorch · ☆1,273 · Updated 3 years ago
- NeurIPS 2025 Spotlight; ICLR 2024 Spotlight; CVPR 2024; EMNLP 2024 · ☆1,810 · Updated 2 months ago
- GIT: A Generative Image-to-text Transformer for Vision and Language · ☆581 · Updated 2 years ago
- Conceptual 12M, a dataset of (image-URL, caption) pairs collected for vision-and-language pre-training · ☆414 · Updated 6 months ago
- Code release for "Learning Video Representations from Large Language Models" · ☆536 · Updated 2 years ago
- MultimodalC4, a multimodal extension of c4 that interleaves millions of images with text · ☆952 · Updated 10 months ago
- Code used for the creation of OBELICS, an open, massive and curated collection of interleaved image-text web documents, containing 141M d… · ☆211 · Updated last year
- When do we not need larger vision models? · ☆412 · Updated last year
- Large-scale text-video dataset: 10 million captioned short videos · ☆674 · Updated last year
- Easily create large video datasets from video URLs · ☆648 · Updated last year
- PyTorch implementation of "V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs" · ☆688 · Updated 2 years ago
- [NeurIPS 2023] Official implementation of "Cheap and Quick: Efficient Vision-Language Instruction Tuning for Large Language Models" · ☆525 · Updated 2 years ago
- The official repo for the paper "VeCLIP: Improving CLIP Training via Visual-enriched Captions" · ☆250 · Updated last year
- Official code for VisProg (CVPR 2023 Best Paper) · ☆758 · Updated last year
- (CVPR 2024) A benchmark for evaluating multimodal LLMs using multiple-choice questions · ☆360 · Updated last year
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition · ☆667 · Updated last year
- A concise but complete implementation of CLIP with various experimental improvements from recent papers · ☆722 · Updated 2 years ago
- Learning from synthetic data: code and models · ☆327 · Updated 2 years ago
- Research Trends in LLM-guided Multimodal Learning · ☆357 · Updated 2 years ago
- Model soups: averaging the weights of multiple fine-tuned models improves accuracy without increasing inference time (a minimal sketch of the averaging step follows this list) · ☆505 · Updated last year
- A repository for research on medium-sized language models · ☆531 · Updated 8 months ago
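For context on the model-soups entry above, here is a minimal sketch of the uniform weight-averaging step it describes. The `uniform_soup` helper and checkpoint paths are illustrative, not taken from the linked repository; the approach assumes all checkpoints were fine-tuned from the same initialization, so their parameters can be averaged element-wise.

```python
import torch

def uniform_soup(state_dicts):
    """Uniform model soup: element-wise average of the parameters of
    several fine-tuned checkpoints (illustrative helper, not the repo's API)."""
    return {
        key: torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
        for key in state_dicts[0]
    }

# Hypothetical usage: checkpoints fine-tuned from the same base model.
# Note: integer buffers (e.g. BatchNorm's num_batches_tracked) would need
# special handling before loading the averaged state dict.
# paths = ["ft_run0.pt", "ft_run1.pt", "ft_run2.pt"]
# soup = uniform_soup([torch.load(p, map_location="cpu") for p in paths])
# model.load_state_dict(soup)  # inference cost is unchanged: one model
```

Because the result is a single set of weights, the soup costs no more at inference time than any one of its ingredients, which is the point of the technique.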