facebookresearch / MetaCLIP
ICLR 2024 Spotlight: curation/training code, metadata, distribution and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Experts via Clustering
☆1,352 · Updated 2 months ago
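Since MetaCLIP ships pre-trained CLIP weights, here is a minimal sketch of loading them through the open_clip interface. The model name `ViT-B-32-quickgelu`, the pretrained tag `metaclip_400m`, and the image path are assumptions for illustration and may differ per release; check the MetaCLIP repository for the exact tags it publishes.

```python
# Minimal sketch (assumed usage): load MetaCLIP weights via open_clip and
# score an image against a few text prompts, CLIP-style.
import torch
from PIL import Image
import open_clip

# "metaclip_400m" is an assumed pretrained tag; consult the repo for current ones.
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32-quickgelu", pretrained="metaclip_400m"
)
tokenizer = open_clip.get_tokenizer("ViT-B-32-quickgelu")

image = preprocess(Image.open("example.jpg")).unsqueeze(0)  # any local image
text = tokenizer(["a diagram", "a dog", "a cat"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # L2-normalize embeddings, then take softmax over image-text similarities
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)  # probabilities over the three prompts
```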
Alternatives and similar repositories for MetaCLIP:
Users interested in MetaCLIP are comparing it to the repositories listed below.
- Hiera: A fast, powerful, and simple hierarchical vision transformer. ☆951 · Updated 11 months ago
- DataComp: In search of the next generation of multimodal datasets ☆679 · Updated last year
- [CVPR 2023] Official Implementation of X-Decoder for generalized decoding for pixel, image and language ☆1,303 · Updated last year
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ☆828 · Updated 2 months ago
- CLIP-like model evaluation ☆664 · Updated this week
- VisionLLM Series ☆1,002 · Updated 2 weeks ago
- Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention net out of DeepMind, in PyTorch ☆1,230 · Updated 2 years ago
- PyTorch Implementation of "V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs" ☆568 · Updated last year
- This repository provides the code and model checkpoints for AIMv1 and AIMv2 research projects. ☆1,182 · Updated 2 months ago
- Robust fine-tuning of zero-shot models ☆669 · Updated 2 years ago
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ☆722 · Updated last year
- Emu Series: Generative Multimodal Models from BAAI ☆1,683 · Updated 4 months ago
- LLaVA-Interactive-Demo ☆362 · Updated 6 months ago
- [CVPR 2024] Alpha-CLIP: A CLIP Model Focusing on Wherever You Want ☆777 · Updated 6 months ago
- Grounded Language-Image Pre-training ☆2,325 · Updated last year
- When do we not need larger vision models? ☆368 · Updated last week
- GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest ☆522 · Updated 8 months ago
- [ICLR 2024 🔥] Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment ☆784 · Updated 10 months ago
- Official code for VisProg (CVPR 2023 Best Paper!) ☆704 · Updated 5 months ago
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training). ☆1,174 · Updated 7 months ago
- This is the official repository for the LENS (Large Language Models Enhanced to See) system. ☆352 · Updated last year
- MultimodalC4 is a multimodal extension of c4 that interleaves millions of images with text. ☆917 · Updated 8 months ago
- Autoregressive Model Beats Diffusion: 🦙 Llama for Scalable Image Generation ☆1,576 · Updated 6 months ago
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more. ☆2,606 · Updated this week
- Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in PyTorch ☆1,104 · Updated last year
- TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale. ☆1,541 · Updated this week
- A collection of papers on the topic of "Computer Vision in the Wild (CVinW)" ☆1,253 · Updated 11 months ago