mlfoundations / datacomp
DataComp: In search of the next generation of multimodal datasets
⭐ 705 · Updated 2 weeks ago
Alternatives and similar repositories for datacomp
Users interested in datacomp are comparing it to the libraries listed below.
- CLIP-like model evaluation · ⭐ 705 · Updated last month
- Code and models for the ICML 2023 paper "Grounding Language Models to Images for Multimodal Inputs and Outputs". · ⭐ 482 · Updated last year
- Code and models for the NeurIPS 2023 paper "Generating Images with Multimodal Language Models". · ⭐ 457 · Updated last year
- [NeurIPS 2023] This repository includes the official implementation of our paper "An Inverse Scaling Law for CLIP Training" · ⭐ 315 · Updated 11 months ago
- ⭐ 609 · Updated last year
- ICLR 2024 Spotlight: curation/training code, metadata, distribution and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Expert… · ⭐ 1,435 · Updated 2 months ago
- Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention net out of DeepMind, in PyTorch · ⭐ 1,240 · Updated 2 years ago
- Robust fine-tuning of zero-shot models · ⭐ 699 · Updated 3 years ago
- When do we not need larger vision models? · ⭐ 391 · Updated 3 months ago
- Open reproduction of MUSE for fast text2image generation. · ⭐ 351 · Updated 11 months ago
- This is the official repository for the LENS (Large Language Models Enhanced to See) system. · ⭐ 352 · Updated last year
- (CVPR 2024) A benchmark for evaluating Multimodal LLMs using multiple-choice questions. · ⭐ 339 · Updated 4 months ago
- [NeurIPS 2023] Official implementations of "Cheap and Quick: Efficient Vision-Language Instruction Tuning for Large Language Models" · ⭐ 520 · Updated last year
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills · ⭐ 740 · Updated last year
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… · ⭐ 871 · Updated 5 months ago
- MultimodalC4 is a multimodal extension of c4 that interleaves millions of images with text. · ⭐ 930 · Updated last month
- An open source implementation of "Scaling Autoregressive Multi-Modal Models: Pretraining and Instruction Tuning", an all-new multimodal … · ⭐ 361 · Updated last year
- Learning from synthetic data - code and models · ⭐ 315 · Updated last year
- Research Trends in LLM-guided Multimodal Learning. · ⭐ 358 · Updated last year
- GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest · ⭐ 527 · Updated 11 months ago
- Conceptual 12M is a dataset containing (image-URL, caption) pairs collected for vision-and-language pre-training. · ⭐ 390 · Updated 2 years ago
- Code release for "Learning Video Representations from Large Language Models" · ⭐ 519 · Updated last year
- Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing imag… · ⭐ 522 · Updated last year
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition · ⭐ 633 · Updated 9 months ago
- Official implementation of SEED-LLaMA (ICLR 2024). · ⭐ 612 · Updated 7 months ago
- LLaVA-UHD v2: an MLLM Integrating High-Resolution Semantic Pyramid via Hierarchical Window Transformer · ⭐ 376 · Updated 3 weeks ago
- MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities (ICML 2024) · ⭐ 299 · Updated 3 months ago
- Code used for the creation of OBELICS, an open, massive and curated collection of interleaved image-text web documents, containing 141M d… · ⭐ 202 · Updated 8 months ago
- A method to increase the speed and lower the memory footprint of existing vision transformers. · ⭐ 1,051 · Updated 10 months ago
- Get hundreds of millions of image+URL pairs from the Crawling@Home dataset and preprocess them · ⭐ 220 · Updated 11 months ago