facebookresearch / MetaCLIP
ICLR 2024 Spotlight: curation/training code, metadata, distribution, and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Experts via Clustering
☆1,666 · Updated this week
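MetaCLIP ships CLIP-style checkpoints that load through OpenCLIP. The sketch below follows the usage pattern from the MetaCLIP README; it assumes `open_clip_torch` is installed and that the `metaclip_400m` weights for `ViT-B-32-quickgelu` are available in OpenCLIP's pretrained registry (check the repo for current model names), and the image path is a placeholder:

```python
import torch
from PIL import Image
import open_clip

# Assumption: the 'metaclip_400m' tag for 'ViT-B-32-quickgelu' exists in
# OpenCLIP's pretrained registry (per the MetaCLIP README).
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32-quickgelu", pretrained="metaclip_400m")
tokenizer = open_clip.get_tokenizer("ViT-B-32-quickgelu")
model.eval()

image = preprocess(Image.open("example.jpg")).unsqueeze(0)  # placeholder path
text = tokenizer(["a diagram", "a dog", "a cat"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # L2-normalize so the dot product below is cosine similarity.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    # Scaled similarities turned into a distribution over the captions.
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
print(probs)
```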
Alternatives and similar repositories for MetaCLIP
Users interested in MetaCLIP are comparing it to the repositories listed below.
- This repository provides the code and model checkpoints for the AIMv1 and AIMv2 research projects. ☆1,359 · Updated 3 weeks ago
- DataComp: In search of the next generation of multimodal datasets ☆736 · Updated 4 months ago
- Hiera: A fast, powerful, and simple hierarchical vision transformer. ☆1,012 · Updated last year
- Emu Series: Generative Multimodal Models from BAAI ☆1,743 · Updated 11 months ago
- [CVPR 2023] Official Implementation of X-Decoder for generalized decoding for pixel, image and language ☆1,328 · Updated last year
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ☆908 · Updated 3 weeks ago
- CLIP-like model evaluation (the zero-shot protocol such harnesses score is sketched after this list) ☆759 · Updated 2 weeks ago
- Robust fine-tuning of zero-shot models ☆730 · Updated 3 years ago
- VisionLLM Series ☆1,101 · Updated 6 months ago
- A general representation model across vision, audio, language modalities. Paper: ONE-PEACE: Exploring One General Representation Model To… ☆1,051 · Updated 10 months ago
- Official repository for "AM-RADIO: Reduce All Domains Into One" ☆1,319 · Updated last week
- A method to increase the speed and lower the memory footprint of existing vision transformers. ☆1,084 · Updated last year
- Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention net out of DeepMind, in PyTorch ☆1,259 · Updated 2 years ago
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more. ☆3,097 · Updated 3 months ago
- Cambrian-1 is a family of multimodal LLMs with a vision-centric design. ☆1,940 · Updated 10 months ago
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ☆758 · Updated last year
- PyTorch Implementation of "V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs" ☆669 · Updated last year
- ☆623 · Updated last year
- Autoregressive Model Beats Diffusion: 🦙 Llama for Scalable Image Generation ☆1,844 · Updated last year
- Grounded Language-Image Pre-training ☆2,487 · Updated last year
- A collection of papers on the topic of "Computer Vision in the Wild (CVinW)" ☆1,326 · Updated last year
- This repository contains the official implementation of the research papers "MobileCLIP" (CVPR 2024) and "MobileCLIP2" (TMLR, August 2025) ☆1,073 · Updated this week
- EVA Series: Visual Representation Fantasies from BAAI ☆2,556 · Updated last year
- LLM2CLIP makes SOTA pre-trained CLIP models even stronger. ☆537 · Updated 2 months ago
- [ECCV 2024] Video Foundation Models & Data for Multimodal Understanding ☆2,031 · Updated 3 weeks ago
- [CVPR 2024] Alpha-CLIP: A CLIP Model Focusing on Wherever You Want ☆837 · Updated last month
- State-of-the-art Image & Video CLIP, Multimodal Large Language Models, and More! ☆1,542 · Updated last week
- Official code for VisProg (CVPR 2023 Best Paper!) ☆743 · Updated last year
- Official PyTorch implementation of ODISE: Open-Vocabulary Panoptic Segmentation with Text-to-Image Diffusion Models [CVPR 2023 Highlight] ☆924 · Updated last year
- Pix2Seq codebase: multi-tasks with generative modeling (autoregressive and diffusion) ☆922 · Updated last year
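Several entries above (the CLIP-like model evaluation harness, robust fine-tuning of zero-shot models) revolve around the standard CLIP zero-shot classification protocol: build one text embedding per class by averaging several prompt templates, then classify images by cosine similarity against those class embeddings. A minimal sketch under the same assumptions as the snippet near the top (OpenCLIP installed, `metaclip_400m` available in its registry); the class names and templates here are hypothetical placeholders:

```python
import torch
import open_clip

# Same assumed checkpoint as the earlier snippet.
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32-quickgelu", pretrained="metaclip_400m")
tokenizer = open_clip.get_tokenizer("ViT-B-32-quickgelu")
model.eval()

# Hypothetical class names and prompt templates, for illustration only.
classnames = ["cat", "dog", "bird"]
templates = ["a photo of a {}.", "a blurry photo of a {}.", "art of a {}."]

with torch.no_grad():
    rows = []
    for name in classnames:
        # Embed every template for this class, L2-normalize, then average:
        # prompt ensembling yields one weight vector per class.
        emb = model.encode_text(tokenizer([t.format(name) for t in templates]))
        emb = emb / emb.norm(dim=-1, keepdim=True)
        rows.append(emb.mean(dim=0))
    classifier = torch.stack(rows)                       # (num_classes, dim)
    classifier = classifier / classifier.norm(dim=-1, keepdim=True)

# A normalized image embedding (as in the earlier snippet) then classifies via:
# pred = classnames[(image_features @ classifier.T).argmax(dim=-1).item()]
```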