mlfoundations/datacomp
DataComp: In search of the next generation of multimodal datasets
★745 · Updated 6 months ago
Alternatives and similar repositories for datacomp
Users interested in datacomp are comparing it to the repositories listed below.
- CLIP-like model evaluation · ★785 · Updated last week
- Code and models for the ICML 2023 paper "Grounding Language Models to Images for Multimodal Inputs and Outputs" · ★483 · Updated 2 years ago
- Code and models for the NeurIPS 2023 paper "Generating Images with Multimodal Language Models" · ★468 · Updated last year
- [NeurIPS 2023] Official implementation of the paper "An Inverse Scaling Law for CLIP Training" · ★319 · Updated last year
- Implementation of 🦩 Flamingo, a state-of-the-art few-shot visual question answering attention network from DeepMind, in PyTorch · ★1,266 · Updated 3 years ago
- Official repository for the LENS (Large Language Models Enhanced to See) system · ★354 · Updated 3 months ago
- An open-source implementation of "Scaling Autoregressive Multi-Modal Models: Pretraining and Instruction Tuning", an all-new multimodal … · ★361 · Updated last year
- ICLR 2024 Spotlight: curation/training code, metadata, distribution, and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Expert… · ★1,727 · Updated last week
- Robust fine-tuning of zero-shot models · ★748 · Updated 3 years ago
- Code used for the creation of OBELICS, an open, massive, and curated collection of interleaved image-text web documents, containing 141M documents… · ★209 · Updated last year
- Open reproduction of MUSE for fast text2image generation · ★355 · Updated last year
- When do we not need larger vision models? · ★412 · Updated 9 months ago
- PyTorch implementation of "V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs" · ★679 · Updated last year
- GIT: A Generative Image-to-text Transformer for Vision and Language · ★575 · Updated last year
- Conceptual 12M is a dataset containing (image-URL, caption) pairs collected for vision-and-language pre-training · ★405 · Updated 4 months ago
- (CVPR 2024) A benchmark for evaluating multimodal LLMs using multiple-choice questions · ★354 · Updated 10 months ago
- [NeurIPS 2023] Official implementation of "Cheap and Quick: Efficient Vision-Language Instruction Tuning for Large Language Models" · ★523 · Updated last year
- MultimodalC4 is a multimodal extension of C4 that interleaves millions of images with text · ★942 · Updated 7 months ago
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition · ★657 · Updated last year
- Easily create large video datasets from video URLs · ★637 · Updated last year
- Official implementation of SEED-LLaMA (ICLR 2024) · ★630 · Updated last year
- Large-scale text-video dataset: 10 million captioned short videos · ★663 · Updated last year
- Official code for VisProg (CVPR 2023 Best Paper!) · ★751 · Updated last year
- Research Trends in LLM-guided Multimodal Learning · ★356 · Updated 2 years ago
- Code release for "Learning Video Representations from Large Language Models" · ★537 · Updated 2 years ago
- Code/data for the paper "LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding" · ★268 · Updated last year
- A repository for research on medium-sized language models · ★518 · Updated 5 months ago
- Get hundreds of millions of image+URL pairs from the Crawling@Home dataset and preprocess them · ★222 · Updated last year
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills · ★760 · Updated last year