This repository contains the official implementation of the research papers "MobileCLIP" (CVPR 2024) and "MobileCLIP2" (TMLR, August 2025).
☆1,443, updated Oct 9, 2025
Alternatives and similar repositories for ml-mobileclip
Users interested in ml-mobileclip are comparing it to the libraries listed below.
- 4M: Massively Multimodal Masked Modeling (☆1,788, updated Jun 2, 2025)
- This repository provides the code and model checkpoints for the AIMv1 and AIMv2 research projects. (☆1,402, updated Aug 4, 2025)
- NeurIPS 2025 Spotlight; ICLR 2024 Spotlight; CVPR 2024; EMNLP 2024 (☆1,815, updated Nov 27, 2025)
- This repository contains the official implementation of the research paper "FastViT: A Fast Hybrid Vision Transformer using Structural R… (☆1,990, updated Nov 30, 2023)
- An open-source implementation of CLIP. (☆13,430, updated this week)
- Strong and Open Vision Language Assistant for Mobile Devices (☆1,334, updated Apr 15, 2024)
- Official PyTorch implementation of "EdgeSAM: Prompt-In-the-Loop Distillation for On-Device Deployment of SAM" (☆1,117, updated May 24, 2025)
- [CVPR 2024] Real-Time Open-Vocabulary Object Detection (☆6,217, updated Feb 26, 2025)
- Grounding DINO 1.5: IDEA Research's Most Capable Open-World Object Detection Model Series (☆1,086, updated Jan 21, 2025)
- Repository for the paper "TiC-CLIP: Continual Training of CLIP Models" (ICLR 2024) (☆111, updated Jun 11, 2024)
- Zero-label image classification via OpenCLIP knowledge distillation (☆142, updated Sep 12, 2023)
- Efficient vision foundation models for high-resolution generation and perception. (☆3,249, updated Sep 5, 2025)
- The official repo for the paper "VeCLIP: Improving CLIP Training via Visual-enriched Captions" (☆251, updated Jan 22, 2025)
- YOLOE: Real-Time Seeing Anything [ICCV 2025] (☆2,051, updated Jun 26, 2025)
- This repository contains the official implementation of the research paper "An Improved One Millisecond Mobile Backbone" (CVPR 2023). (☆817, updated Jul 25, 2022)
- This repository contains the official implementation of "FastVLM: Efficient Vision Encoding for Vision Language Models" (CVPR 2025) (☆7,224, updated May 5, 2025)
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT, and more. (☆3,371, updated May 19, 2025)
- [ECCV 2024] Official implementation of the paper "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection" (☆9,760, updated Aug 12, 2024)
- The official code for the MobileSAM project, which makes SAM lightweight for mobile applications and beyond. (☆5,631, updated Dec 19, 2025)
- EVA Series: Visual Representation Fantasies from BAAI (☆2,647, updated Aug 1, 2024)
- Easily compute CLIP embeddings and build a CLIP retrieval system with them (☆2,732, updated Aug 15, 2025)
- RayGen: Multi-Modal Dataset Reinforcement for MobileCLIP and MobileCLIP2 (☆39, updated Aug 29, 2025)
- CLIP-Finder enables semantic offline searches of gallery photos using natural-language descriptions or the camera. Built on A… (☆90, updated Jul 25, 2024)
- RepViT: Revisiting Mobile CNN From ViT Perspective [CVPR 2024] and RepViT-SAM: Towards Real-Time Segmenting Anything (☆1,065, updated Jun 14, 2024)
- The repository provides code for running inference with the Meta Segment Anything Model 2 (SAM 2), links for downloading the trained mode… (☆18,560, updated Dec 25, 2024)
- PyTorch code and models for the DINOv2 self-supervised learning method. (☆12,427, updated Feb 24, 2026)
- VILA is a family of state-of-the-art vision-language models (VLMs) for diverse multimodal AI tasks across the edge, data center, and clou… (☆3,766, updated Nov 28, 2025)
- Grounded Language-Image Pre-training (☆2,575, updated Jan 24, 2024)
- [NeurIPS '23 Oral] Visual Instruction Tuning (LLaVA), built toward GPT-4V-level capabilities and beyond. (☆24,478, updated Aug 12, 2024)
- [ICCV 2023] TinyCLIP: CLIP Distillation via Affinity Mimicking and Weight Inheritance (☆127, updated Jul 16, 2024)
- Cambrian-1 is a family of multimodal LLMs with a vision-centric design. (☆1,986, updated Nov 7, 2025)
- Official repository for "AM-RADIO: Reduce All Domains Into One" (☆1,665, updated Feb 11, 2026)
- CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image (☆32,642, updated Feb 18, 2026)
- Grounded SAM: Marrying Grounding DINO with Segment Anything & Stable Diffusion & Recognize Anything - Automatically Detect, Segment and … (☆17,431, updated Sep 5, 2024)
- Depth Pro: Sharp Monocular Metric Depth in Less Than a Second. (☆5,313, updated Apr 21, 2025)
- [ECCV 2024] Official code for "Long-CLIP: Unlocking the Long-Text Capability of CLIP" (☆893, updated Aug 13, 2024)
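Most of the CLIP-style repositories above (MobileCLIP, OpenCLIP, TinyCLIP, Long-CLIP, and CLIP itself) share the same zero-shot inference pattern: embed an image and a set of candidate captions, L2-normalize both, and softmax over the cosine similarities. A minimal NumPy sketch of that scoring step follows; it uses random stand-in embeddings, since loading a real checkpoint from any of these repos is outside the scope of this list.

```python
import numpy as np

def zero_shot_scores(image_emb, text_embs, temperature=100.0):
    """CLIP-style scoring: cosine similarity between one image embedding
    and N candidate text embeddings, softmaxed into a probability
    distribution over the candidates."""
    # L2-normalize so dot products become cosine similarities
    image_emb = image_emb / np.linalg.norm(image_emb)
    text_embs = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = temperature * (text_embs @ image_emb)  # shape (N,)
    exp = np.exp(logits - logits.max())             # numerically stable softmax
    return exp / exp.sum()

# Stand-in embeddings; in practice an image encoder and a text encoder
# from one of the repos above would produce these vectors.
rng = np.random.default_rng(0)
image = rng.normal(size=512)
texts = rng.normal(size=(3, 512))   # e.g. 3 candidate captions
probs = zero_shot_scores(image, texts)
print(probs)
```

The temperature (CLIP's learned logit scale, commonly around 100) sharpens the distribution; with it set to 1 the scores would be nearly uniform for unit-norm embeddings.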