mlfoundations / datacomp
DataComp: In search of the next generation of multimodal datasets
⭐ 745 · Updated 5 months ago
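DataComp fixes the model, training recipe, and compute budget, and scores submissions purely on how well they curate the image-text training pool. One of the baseline families benchmarked in the paper is CLIP-score filtering: keep only pairs whose image and caption embeddings agree. The sketch below illustrates that idea with open_clip; the model name, pretrained tag, threshold, and example data are illustrative assumptions, not the benchmark's exact pipeline.

```python
# Minimal sketch of CLIP-score filtering, one of the curation baselines the
# DataComp paper evaluates. Model choice, threshold, and data are assumptions.
import torch
import open_clip
from PIL import Image

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
tokenizer = open_clip.get_tokenizer("ViT-B-32")
model.eval()

def clip_score(image_path: str, caption: str) -> float:
    """Cosine similarity between an image and its caption."""
    image = preprocess(Image.open(image_path)).unsqueeze(0)
    tokens = tokenizer([caption])
    with torch.no_grad():
        img = model.encode_image(image)
        txt = model.encode_text(tokens)
    img = img / img.norm(dim=-1, keepdim=True)
    txt = txt / txt.norm(dim=-1, keepdim=True)
    return (img @ txt.T).item()

# Keep pairs that clear an (assumed) similarity threshold.
pairs = [("cat.jpg", "a photo of a cat")]  # hypothetical candidate pool
kept = [(p, c) for p, c in pairs if clip_score(p, c) > 0.28]
```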
Alternatives and similar repositories for datacomp
Users interested in datacomp are comparing it to the libraries listed below.
- CLIP-like model evaluation · ⭐ 771 · Updated last month
- Code and models for the ICML 2023 paper "Grounding Language Models to Images for Multimodal Inputs and Outputs" · ⭐ 481 · Updated last year
- [NeurIPS 2023] This repository includes the official implementation of our paper "An Inverse Scaling Law for CLIP Training" · ⭐ 316 · Updated last year
- Code and models for the NeurIPS 2023 paper "Generating Images with Multimodal Language Models" · ⭐ 466 · Updated last year
- The official repository for the LENS (Large Language Models Enhanced to See) system · ⭐ 354 · Updated 2 months ago
- An open source implementation of "Scaling Autoregressive Multi-Modal Models: Pretraining and Instruction Tuning", an all-new multi modal … · ⭐ 362 · Updated last year
- ICLR 2024 Spotlight: curation/training code, metadata, distribution and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Expert… · ⭐ 1,680 · Updated last week
- ⭐ 628 · Updated last year
- Robust fine-tuning of zero-shot models (WiSE-FT; a weight-interpolation sketch appears after this list) · ⭐ 743 · Updated 3 years ago
- Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention net out of DeepMind, in PyTorch · ⭐ 1,263 · Updated 2 years ago
- Conceptual 12M is a dataset containing (image-URL, caption) pairs collected for vision-and-language pre-training; a download sketch appears after this list · ⭐ 402 · Updated 2 months ago
- Open reproduction of MUSE for fast text2image generation · ⭐ 358 · Updated last year
- Easily create large video datasets from video URLs · ⭐ 634 · Updated last year
- MultimodalC4 is a multimodal extension of C4 that interleaves millions of images with text · ⭐ 941 · Updated 6 months ago
- GIT: A Generative Image-to-text Transformer for Vision and Language · ⭐ 574 · Updated last year
- When do we not need larger vision models? · ⭐ 407 · Updated 7 months ago
- Code used for the creation of OBELICS, an open, massive and curated collection of interleaved image-text web documents, containing 141M d… · ⭐ 206 · Updated last year
- Large-scale text-video dataset: 10 million captioned short videos · ⭐ 659 · Updated last year
- The official repo for the paper "VeCLIP: Improving CLIP Training via Visual-enriched Captions" · ⭐ 246 · Updated 8 months ago
- Code release for "Learning Video Representations from Large Language Models" · ⭐ 536 · Updated 2 years ago
- (CVPR 2024) A benchmark for evaluating Multimodal LLMs using multiple-choice questions · ⭐ 349 · Updated 8 months ago
- Official implementation of SEED-LLaMA (ICLR 2024) · ⭐ 627 · Updated last year
- Official code for VisProg (CVPR 2023 Best Paper!) · ⭐ 746 · Updated last year
- [NeurIPS 2023] Official implementation of "Cheap and Quick: Efficient Vision-Language Instruction Tuning for Large Language Models" · ⭐ 524 · Updated last year
- PyTorch implementation of "V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs" · ⭐ 675 · Updated last year
- Learning from synthetic data: code and models · ⭐ 322 · Updated last year
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills · ⭐ 758 · Updated last year
- E5-V: Universal Embeddings with Multimodal Large Language Models · ⭐ 271 · Updated 9 months ago
- MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities (ICML 2024) · ⭐ 310 · Updated 8 months ago
- Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing imag… · ⭐ 539 · Updated last year
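For the "Robust fine-tuning of zero-shot models" entry above (the WiSE-FT paper), the core recipe is a weight-space interpolation between the zero-shot and fine-tuned checkpoints. A minimal sketch, assuming both checkpoints are plain PyTorch state dicts with identical keys; the file paths and alpha value are hypothetical:

```python
# Weight-space ensembling in the spirit of WiSE-FT ("Robust fine-tuning of
# zero-shot models"): interpolate zero-shot and fine-tuned weights.
# Checkpoint paths and alpha=0.5 are illustrative assumptions.
import torch

zero_shot = torch.load("zeroshot.pt", map_location="cpu")    # state_dict
fine_tuned = torch.load("finetuned.pt", map_location="cpu")  # state_dict
alpha = 0.5  # 0.0 = pure zero-shot, 1.0 = pure fine-tuned

merged = {
    k: (1 - alpha) * zero_shot[k] + alpha * fine_tuned[k]
    for k in zero_shot
}
# Load `merged` into a model with the same architecture:
# model.load_state_dict(merged)
```

Sweeping alpha trades off in-distribution accuracy (higher alpha) against robustness to distribution shift (lower alpha); the paper reports that intermediate values often improve both over the fine-tuned endpoint.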
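The Conceptual 12M entry above distributes (image-URL, caption) pairs rather than images, so the dataset is typically materialized with a downloader such as img2dataset. A sketch assuming a local TSV with `url` and `caption` columns; the file name, shard format, and sizing parameters are illustrative:

```python
# Sketch: turn CC12M's (image-URL, caption) TSV into WebDataset shards
# with img2dataset. File name, columns, and resize settings are assumptions.
from img2dataset import download

download(
    url_list="cc12m.tsv",          # hypothetical local copy of the CC12M TSV
    input_format="tsv",
    url_col="url",
    caption_col="caption",
    output_format="webdataset",
    output_folder="cc12m_shards",
    image_size=256,                # resize images on the fly
    processes_count=16,
    thread_count=64,
)
```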