apple / ml-mobileclip
This repository contains the official implementation of the research papers "MobileCLIP" (CVPR 2024) and "MobileCLIP2" (TMLR, August 2025).
☆1,275 · Updated 3 weeks ago
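For context, MobileCLIP models expose the familiar CLIP zero-shot interface. The sketch below is modeled on the usage pattern documented in the ml-mobileclip README (`mobileclip.create_model_and_transforms` and `mobileclip.get_tokenizer`); the model name `mobileclip_s0`, the checkpoint path, the image file, and the label set are placeholder assumptions, and the API may differ across releases.

```python
import torch
from PIL import Image
import mobileclip  # install from the ml-mobileclip repo

# Model name and checkpoint path are placeholders; see the repo for the
# released model variants and their download links.
model, _, preprocess = mobileclip.create_model_and_transforms(
    "mobileclip_s0", pretrained="checkpoints/mobileclip_s0.pt"
)
tokenizer = mobileclip.get_tokenizer("mobileclip_s0")

image = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0)
text = tokenizer(["a diagram", "a dog", "a cat"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # L2-normalize, then take cosine-similarity logits over the labels.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print("Label probabilities:", probs)
```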
Alternatives and similar repositories for ml-mobileclip
Users interested in ml-mobileclip are comparing it to the libraries listed below.
- This repository provides the code and model checkpoints for the AIMv1 and AIMv2 research projects. ☆1,378 · Updated 2 months ago
- Grounding DINO 1.5: IDEA Research's Most Capable Open-World Object Detection Model Series ☆1,050 · Updated 9 months ago
- ICLR 2024 Spotlight: curation/training code, metadata, distribution and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Expert… ☆1,699 · Updated last month
- 4M: Massively Multimodal Masked Modeling ☆1,767 · Updated 5 months ago
- State-of-the-art Image & Video CLIP, Multimodal Large Language Models, and More! ☆1,698 · Updated last month
- Official repository for "AM-RADIO: Reduce All Domains Into One" ☆1,376 · Updated 2 weeks ago
- Strong and Open Vision Language Assistant for Mobile Devices ☆1,288 · Updated last year
- Efficient Track Anything ☆657 · Updated 9 months ago
- YOLOE: Real-Time Seeing Anything [ICCV 2025] ☆1,857 · Updated 4 months ago
- Official Implementation of CVPR 2024 highlight paper: Matching Anything by Segmenting Anything ☆1,346 · Updated 6 months ago
- Official PyTorch implementation of "EdgeSAM: Prompt-In-the-Loop Distillation for On-Device Deployment of SAM" ☆1,068 · Updated 5 months ago
- This repository contains the official implementation of the research paper "FastViT: A Fast Hybrid Vision Transformer using Structural R… ☆1,962 · Updated last year
- DINO-X: The World's Top-Performing Vision Model for Open-World Object Detection and Understanding ☆1,265 · Updated 3 months ago
- LLM2CLIP makes SOTA pretrained CLIP models even more SOTA. ☆557 · Updated 4 months ago
- Hiera: A fast, powerful, and simple hierarchical vision transformer. ☆1,036 · Updated last year
- VisionLLM Series ☆1,119 · Updated 8 months ago
- [ICCV 2025] Implementation for Describe Anything: Detailed Localized Image and Video Captioning ☆1,372 · Updated 4 months ago
- Code for the Molmo Vision-Language Model ☆786 · Updated 10 months ago
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ☆922 · Updated 2 months ago
- [ICCV 2025] OpenVision: A Fully-Open, Cost-Effective Family of Advanced Vision Encoders for Multimodal Learning ☆400 · Updated last month
- Quick exploration into fine-tuning Florence-2 ☆334 · Updated last year
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more. ☆3,203 · Updated 5 months ago
- [AAAI 2025] Official PyTorch implementation of "TinySAM: Pushing the Envelope for Efficient Segment Anything Model" ☆522 · Updated 9 months ago
- ☆388 · Updated last year
- Grounded SAM 2: Ground and Track Anything in Videos with Grounding DINO, Florence-2 and SAM 2 ☆2,955 · Updated this week
- Efficient vision foundation models for high-resolution generation and perception. ☆3,112 · Updated last month
- MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases (ICML 2024) ☆1,386 · Updated 6 months ago
- Official code for "FeatUp: A Model-Agnostic Framework for Features at Any Resolution" (ICLR 2024) ☆1,594 · Updated last year
- A family of lightweight multimodal models. ☆1,046 · Updated 11 months ago
- Famous Vision Language Models and Their Architectures ☆1,064 · Updated 8 months ago