apple / ml-mobileclip
This repository contains the official implementation of the research papers "MobileCLIP" (CVPR 2024) and "MobileCLIP2" (TMLR, August 2025).
☆1,218 · Updated this week
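For context, the zero-shot inference sketch below is adapted from the usage documented in the ml-mobileclip README; the `mobileclip_s0` variant name, checkpoint path, image file, and labels are illustrative placeholders, not a definitive API reference.

```python
# Hedged sketch of zero-shot classification with MobileCLIP, assuming the
# `mobileclip` package from apple/ml-mobileclip is installed and a checkpoint
# has been downloaded locally (the path below is a placeholder).
import torch
from PIL import Image
import mobileclip

model, _, preprocess = mobileclip.create_model_and_transforms(
    "mobileclip_s0", pretrained="checkpoints/mobileclip_s0.pt"
)
tokenizer = mobileclip.get_tokenizer("mobileclip_s0")
model.eval()

image = preprocess(Image.open("example.png").convert("RGB")).unsqueeze(0)
text = tokenizer(["a diagram", "a dog", "a cat"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalize embeddings and compare via cosine similarity.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print("Label probabilities:", probs)
```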
Alternatives and similar repositories for ml-mobileclip
Users interested in ml-mobileclip are comparing it to the libraries listed below.
- This repository provides the code and model checkpoints for the AIMv1 and AIMv2 research projects. ☆1,364 · Updated last month
- ICLR 2024 Spotlight: curation/training code, metadata, distribution and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Expert… ☆1,675 · Updated 3 weeks ago
- 4M: Massively Multimodal Masked Modeling ☆1,764 · Updated 3 months ago
- State-of-the-art Image & Video CLIP, Multimodal Large Language Models, and More! ☆1,607 · Updated 2 weeks ago
- Grounding DINO 1.5: IDEA Research's Most Capable Open-World Object Detection Model Series ☆1,033 · Updated 8 months ago
- Official repository for "AM-RADIO: Reduce All Domains Into One" ☆1,344 · Updated 3 weeks ago
- LLM2CLIP makes SOTA pretrained CLIP models even more SOTA. ☆546 · Updated 2 months ago
- Strong and Open Vision Language Assistant for Mobile Devices ☆1,275 · Updated last year
- Official PyTorch implementation of "EdgeSAM: Prompt-In-the-Loop Distillation for On-Device Deployment of SAM" ☆1,058 · Updated 3 months ago
- Efficient Track Anything ☆635 · Updated 8 months ago
- YOLOE: Real-Time Seeing Anything [ICCV 2025] ☆1,746 · Updated 2 months ago
- Hiera: A fast, powerful, and simple hierarchical vision transformer. ☆1,026 · Updated last year
- DINO-X: The World's Top-Performing Vision Model for Open-World Object Detection and Understanding ☆1,222 · Updated last month
- This repository contains the official implementation of the research paper "FastViT: A Fast Hybrid Vision Transformer using Structural R…" ☆1,955 · Updated last year
- [ICCV 2025] OpenVision: A Fully-Open, Cost-Effective Family of Advanced Vision Encoders for Multimodal Learning ☆378 · Updated last week
- [ICCV 2025] Implementation for Describe Anything: Detailed Localized Image and Video Captioning ☆1,333 · Updated 2 months ago
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more. ☆3,135 · Updated 4 months ago
- Official implementation of the CVPR 2024 highlight paper "Matching Anything by Segmenting Anything" ☆1,339 · Updated 4 months ago
- Famous Vision Language Models and Their Architectures ☆1,014 · Updated 6 months ago
- [AAAI 2025] Official PyTorch implementation of "TinySAM: Pushing the Envelope for Efficient Segment Anything Model" ☆515 · Updated 8 months ago
- Code for the Molmo Vision-Language Model ☆748 · Updated 9 months ago
- MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases (ICML 2024). ☆1,356 · Updated 5 months ago
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ☆912 · Updated last month
- A distilled Segment Anything (SAM) model capable of running in real time with NVIDIA TensorRT ☆798 · Updated last year
- Projects based on SigLIP (Zhai et al., 2023) and Hugging Face transformers integration 🤗 ☆275 · Updated 7 months ago
- Quick exploration into fine-tuning Florence-2 ☆330 · Updated last year
- ☆379 · Updated 11 months ago
- This repository is a curated collection of the most exciting and influential CVPR 2024 papers. 🔥 [Paper + Code + Demo] ☆738 · Updated 3 months ago
- Efficient vision foundation models for high-resolution generation and perception. ☆3,076 · Updated 2 weeks ago
- VisionLLM Series ☆1,106 · Updated 6 months ago