facebookresearch / MetaCLIP
NeurIPS 2025 Spotlight; ICLR 2024 Spotlight; CVPR 2024; EMNLP 2024
☆1,780 · Updated 3 weeks ago
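Most of the repositories listed below build on or compare against CLIP-style models, so a minimal zero-shot classification sketch with a MetaCLIP checkpoint loaded through the open_clip library may be useful context. The model name `ViT-B-32-quickgelu`, the pretrained tag `metaclip_400m`, and the image path are assumptions based on OpenCLIP's published checkpoint registry, not something this page specifies:

```python
# Minimal zero-shot classification sketch with a MetaCLIP checkpoint via open_clip.
# ASSUMPTIONS: the model name "ViT-B-32-quickgelu" and pretrained tag "metaclip_400m"
# come from OpenCLIP's checkpoint registry and may differ across versions;
# "cat.jpg" is a placeholder image path.
import torch
import open_clip
from PIL import Image

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32-quickgelu", pretrained="metaclip_400m"
)
tokenizer = open_clip.get_tokenizer("ViT-B-32-quickgelu")
model.eval()

image = preprocess(Image.open("cat.jpg")).unsqueeze(0)  # shape [1, 3, H, W]
text = tokenizer(["a photo of a cat", "a photo of a dog"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # L2-normalize so the dot product below is cosine similarity
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)  # e.g. tensor([[0.98, 0.02]]) for a cat image
```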
Alternatives and similar repositories for MetaCLIP
Users interested in MetaCLIP are comparing it to the libraries listed below.
- This repository provides the code and model checkpoints for the AIMv1 and AIMv2 research projects. ☆1,390 · Updated 4 months ago
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ☆935 · Updated 4 months ago
- CLIP-like model evaluation ☆793 · Updated 2 weeks ago
- DataComp: In search of the next generation of multimodal datasets ☆762 · Updated 7 months ago
- Hiera: A fast, powerful, and simple hierarchical vision transformer. ☆1,045 · Updated last year
- VisionLLM Series ☆1,131 · Updated 9 months ago
- Emu Series: Generative Multimodal Models from BAAI ☆1,761 · Updated last year
- PyTorch implementation of "V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs" ☆682 · Updated last year
- [CVPR 2023] Official implementation of X-Decoder for generalized decoding for pixel, image, and language ☆1,337 · Updated 2 years ago
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT, and more. ☆3,283 · Updated 7 months ago
- Implementation of 🦩 Flamingo, DeepMind's state-of-the-art few-shot visual question answering attention network, in PyTorch ☆1,274 · Updated 3 years ago
- ☆634 · Updated last year
- A collection of papers on the topic of "Computer Vision in the Wild (CVinW)" ☆1,350 · Updated last year
- Official repository for "AM-RADIO: Reduce All Domains Into One" ☆1,419 · Updated last week
- [ECCV 2024] Video Foundation Models & Data for Multimodal Understanding ☆2,135 · Updated last week
- Robust fine-tuning of zero-shot models ☆756 · Updated 3 years ago
- A general representation model across vision, audio, and language modalities. Paper: "ONE-PEACE: Exploring One General Representation Model To…" ☆1,063 · Updated last year
- LLM2CLIP makes SOTA pretrained CLIP models even more SOTA. ☆568 · Updated 3 weeks ago
- 【ICLR 2024 🔥】 Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment ☆854 · Updated last year
- State-of-the-art Image & Video CLIP, Multimodal Large Language Models, and More! ☆1,948 · Updated this week
- Cambrian-1 is a family of multimodal LLMs with a vision-centric design. ☆1,974 · Updated last month
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ☆763 · Updated last year
- A family of lightweight multimodal models. ☆1,049 · Updated last year
- 4M: Massively Multimodal Masked Modeling ☆1,779 · Updated 6 months ago
- A method to increase the speed and lower the memory footprint of existing vision transformers. ☆1,134 · Updated last year
- EVA Series: Visual Representation Fantasies from BAAI ☆2,624 · Updated last year
- Autoregressive Model Beats Diffusion: 🦙 Llama for Scalable Image Generation ☆1,911 · Updated last year
- Official code for VisProg (CVPR 2023 Best Paper!) ☆755 · Updated last year
- Grounded Language-Image Pre-training ☆2,560 · Updated last year
- [CVPR 2024] Alpha-CLIP: A CLIP Model Focusing on Wherever You Want ☆857 · Updated 5 months ago