mlfoundations / datacomp
DataComp: In search of the next generation of multimodal datasets
⭐674 · Updated last year
Alternatives and similar repositories for datacomp:
Users interested in datacomp are comparing it to the repositories listed below.
- CLIP-like model evaluation · ⭐649 · Updated 5 months ago
- [NeurIPS 2023] This repository includes the official implementation of our paper "An Inverse Scaling Law for CLIP Training" · ⭐306 · Updated 7 months ago
- Code and models for the NeurIPS 2023 paper "Generating Images with Multimodal Language Models" · ⭐444 · Updated 11 months ago
- Code and models for the ICML 2023 paper "Grounding Language Models to Images for Multimodal Inputs and Outputs" · ⭐477 · Updated last year
- ⭐588 · Updated 11 months ago
- ICLR 2024 Spotlight: curation/training code, metadata, distribution and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Expert… · ⭐1,337 · Updated last month
- An open source implementation of "Scaling Autoregressive Multi-Modal Models: Pretraining and Instruction Tuning", an all-new multimodal … · ⭐363 · Updated last year
- Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention net out of DeepMind, in PyTorch · ⭐1,227 · Updated 2 years ago
- Robust fine-tuning of zero-shot models · ⭐667 · Updated 2 years ago
- (CVPR 2024) A benchmark for evaluating Multimodal LLMs using multiple-choice questions · ⭐324 · Updated this week
- Code used for the creation of OBELICS, an open, massive and curated collection of interleaved image-text web documents, containing 141M d… · ⭐193 · Updated 4 months ago
- Open reproduction of MUSE for fast text2image generation · ⭐338 · Updated 7 months ago
- Research Trends in LLM-guided Multimodal Learning · ⭐356 · Updated last year
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition · ⭐609 · Updated 5 months ago
- When do we not need larger vision models? · ⭐354 · Updated last month
- This is the official repository for the LENS (Large Language Models Enhanced to See) system · ⭐351 · Updated last year
- Conceptual 12M is a dataset containing (image-URL, caption) pairs collected for vision-and-language pre-training · ⭐375 · Updated last year
- Learning from synthetic data - code and models · ⭐307 · Updated last year
- [NeurIPS 2023] Official implementations of "Cheap and Quick: Efficient Vision-Language Instruction Tuning for Large Language Models" · ⭐513 · Updated 11 months ago
- Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time · ⭐436 · Updated 6 months ago
- Official implementation of SEED-LLaMA (ICLR 2024) · ⭐596 · Updated 3 months ago
- ⭐756 · Updated 6 months ago
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills · ⭐720 · Updated 11 months ago
- MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities (ICML 2024) · ⭐278 · Updated 2 months ago
- Code release for "Learning Video Representations from Large Language Models" · ⭐499 · Updated last year
- Densely Captioned Images (DCI) dataset repository · ⭐167 · Updated 6 months ago
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… · ⭐814 · Updated last month
- Large-scale text-video dataset. 10 million captioned short videos · ⭐616 · Updated 5 months ago
- ⭐304 · Updated 11 months ago
- A concise but complete implementation of CLIP with various experimental improvements from recent papers · ⭐704 · Updated last year