facebookresearch / MetaCLIP
ICLR 2024 Spotlight: curation/training code, metadata, distribution and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Experts via Clustering
⭐ 1,447 · Updated 2 months ago
Alternatives and similar repositories for MetaCLIP
Users interested in MetaCLIP are comparing it with the repositories listed below
- DataComp: In search of the next generation of multimodal datasets · ⭐ 710 · Updated last month
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… · ⭐ 883 · Updated 6 months ago
- Hiera: A fast, powerful, and simple hierarchical vision transformer. · ⭐ 985 · Updated last year
- CLIP-like model evaluation · ⭐ 717 · Updated last week
- Emu Series: Generative Multimodal Models from BAAI · ⭐ 1,723 · Updated 8 months ago
- [CVPR 2023] Official Implementation of X-Decoder for generalized decoding for pixel, image and language · ⭐ 1,314 · Updated last year
- This repository provides the code and model checkpoints for AIMv1 and AIMv2 research projects. · ⭐ 1,290 · Updated last month
- Robust fine-tuning of zero-shot models · ⭐ 705 · Updated 3 years ago
- Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention net out of Deepmind, in Pytorch · ⭐ 1,241 · Updated 2 years ago
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills · ⭐ 743 · Updated last year
- ⭐ 612 · Updated last year
- When do we not need larger vision models? · ⭐ 393 · Updated 3 months ago
- [ECCV 2024] official code for "Long-CLIP: Unlocking the Long-Text Capability of CLIP" · ⭐ 812 · Updated 9 months ago
- VisionLLM Series · ⭐ 1,066 · Updated 3 months ago
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more. · ⭐ 2,900 · Updated last week
- LLaVA-Interactive-Demo · ⭐ 371 · Updated 10 months ago
- 【ICLR 2024🔥】 Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment · ⭐ 810 · Updated last year
- Autoregressive Model Beats Diffusion: 🦙 Llama for Scalable Image Generation · ⭐ 1,761 · Updated 9 months ago
- A general representation model across vision, audio, language modalities. Paper: ONE-PEACE: Exploring One General Representation Model To… · ⭐ 1,038 · Updated 7 months ago
- [CVPR 2024] Alpha-CLIP: A CLIP Model Focusing on Wherever You Want · ⭐ 821 · Updated 10 months ago
- A method to increase the speed and lower the memory footprint of existing vision transformers. · ⭐ 1,054 · Updated 11 months ago
- EVA Series: Visual Representation Fantasies from BAAI · ⭐ 2,496 · Updated 10 months ago
- LLM2CLIP makes SOTA pretrained CLIP models even more SOTA. · ⭐ 517 · Updated 2 months ago
- Easily compute CLIP embeddings and build a CLIP retrieval system with them · ⭐ 2,557 · Updated last year
- PyTorch Implementation of "V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs" · ⭐ 614 · Updated last year
- A collection of papers on the topic of "Computer Vision in the Wild (CVinW)" · ⭐ 1,300 · Updated last year
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training). · ⭐ 1,199 · Updated 11 months ago
- ⭐ 778 · Updated 10 months ago
- [ECCV 2024] Video Foundation Models & Data for Multimodal Understanding · ⭐ 1,887 · Updated last week
- GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest · ⭐ 528 · Updated 11 months ago
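
Many of the CLIP-style checkpoints listed above are loadable through the OpenCLIP library, and MetaCLIP's README documents loading its models the same way. The snippet below is a minimal zero-shot scoring sketch under that assumption: the `ViT-B-32-quickgelu` / `metaclip_400m` tag pair follows MetaCLIP's documentation, while `cat.jpg` and the prompt strings are placeholders for illustration.

```python
# Minimal zero-shot image-text scoring sketch with OpenCLIP (pip install open_clip_torch).
# Assumes the MetaCLIP ViT-B/32 checkpoint tags from the MetaCLIP README; any other
# model/pretrained pair exposed by open_clip should drop in the same way.
import torch
from PIL import Image
import open_clip

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32-quickgelu", pretrained="metaclip_400m"
)
tokenizer = open_clip.get_tokenizer("ViT-B-32-quickgelu")
model.eval()

image = preprocess(Image.open("cat.jpg")).unsqueeze(0)  # placeholder image path
text = tokenizer(["a photo of a cat", "a photo of a dog"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Cosine-normalize both embeddings, then softmax over scaled similarities.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)  # per-prompt probabilities, e.g. high mass on "a photo of a cat"
```

The same embedding calls are what a retrieval setup like clip-retrieval builds on: encode a corpus of images once, then rank them by cosine similarity against an encoded text query.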