apple / ml-mobileclip
This repository contains the official implementation of the research papers "MobileCLIP" (CVPR 2024) and "MobileCLIP2" (TMLR, August 2025).
☆1,416 · Updated 4 months ago
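For orientation, here is a minimal zero-shot classification sketch in the style these CLIP-family repos support. It uses the generic `open_clip` API rather than the repo's own `mobileclip` package, and the Hub tag `apple/MobileCLIP-S2-OpenCLIP` is an assumption; check the repository for the exact checkpoint names before running.

```python
import torch
import open_clip
from PIL import Image

# NOTE: this Hub tag is an assumption; see the repo / Hugging Face for the
# exact MobileCLIP checkpoint names.
MODEL_TAG = "hf-hub:apple/MobileCLIP-S2-OpenCLIP"

# create_model_and_transforms returns (model, train_transform, val_transform)
model, _, preprocess = open_clip.create_model_and_transforms(MODEL_TAG)
tokenizer = open_clip.get_tokenizer(MODEL_TAG)
model.eval()

image = preprocess(Image.open("cat.jpg")).unsqueeze(0)  # (1, 3, H, W)
texts = tokenizer(["a photo of a cat", "a photo of a dog"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(texts)
    # L2-normalize, then score each caption by scaled cosine similarity.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)  # per-caption probabilities, e.g. tensor([[0.99, 0.01]])
```

The same pattern applies to most of the CLIP-style models listed below, swapping only the model tag.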
Alternatives and similar repositories for ml-mobileclip
Users interested in ml-mobileclip are comparing it to the repositories listed below.
- This repository provides the code and model checkpoints for the AIMv1 and AIMv2 research projects. ☆1,396 · Updated 6 months ago
- 4M: Massively Multimodal Masked Modeling ☆1,789 · Updated 8 months ago
- NeurIPS 2025 Spotlight; ICLR 2024 Spotlight; CVPR 2024; EMNLP 2024 ☆1,810 · Updated 2 months ago
- State-of-the-art Image & Video CLIP, Multimodal Large Language Models, and More! ☆2,147 · Updated 2 weeks ago
- Strong and Open Vision Language Assistant for Mobile Devices ☆1,330 · Updated last year
- Grounding DINO 1.5: IDEA Research's Most Capable Open-World Object Detection Model Series ☆1,083 · Updated last year
- Official PyTorch implementation of "EdgeSAM: Prompt-In-the-Loop Distillation for On-Device Deployment of SAM" ☆1,113 · Updated 8 months ago
- Official repository for "AM-RADIO: Reduce All Domains Into One" ☆1,505 · Updated last week
- Efficient Track Anything ☆775 · Updated last year
- This repository contains the official implementation of the research paper "FastViT: A Fast Hybrid Vision Transformer using Structural Reparameterization" ☆1,987 · Updated 2 years ago
- DINO-X: The World's Top-Performing Vision Model for Open-World Object Detection and Understanding ☆1,334 · Updated 6 months ago
- Hiera: A fast, powerful, and simple hierarchical vision transformer. ☆1,052 · Updated last year
- Code for the Molmo Vision-Language Model ☆870 · Updated last year
- LLM2CLIP significantly improves already state-of-the-art CLIP models. ☆623 · Updated last week
- [AAAI 2025] Official PyTorch implementation of "TinySAM: Pushing the Envelope for Efficient Segment Anything Model" ☆535 · Updated last year
- Official implementation of the CVPR 2024 highlight paper "Matching Anything by Segmenting Anything" ☆1,362 · Updated 9 months ago
- VILA is a family of state-of-the-art vision language models (VLMs) for diverse multimodal AI tasks across the edge, data center, and cloud. ☆3,737 · Updated 2 months ago
- YOLOE: Real-Time Seeing Anything [ICCV 2025] ☆2,029 · Updated 7 months ago
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT, and more. ☆3,343 · Updated 8 months ago
- A family of lightweight multimodal models. ☆1,050 · Updated last year
- VisionLLM Series ☆1,137 · Updated 11 months ago
- Grounded SAM 2: Ground and Track Anything in Videos with Grounding DINO, Florence-2, and SAM 2 ☆3,255 · Updated 2 months ago
- [ICCV 2025] Implementation of "Describe Anything: Detailed Localized Image and Video Captioning" ☆1,448 · Updated 7 months ago
- [CVPR 2025] Official PyTorch implementation of "EdgeTAM: On-Device Track Anything Model" ☆868 · Updated 2 weeks ago
- Cambrian-1 is a family of multimodal LLMs with a vision-centric design. ☆1,985 · Updated 3 months ago
- Projects based on SigLIP (Zhai et al., 2023) and Hugging Face transformers integration 🤗 ☆298 · Updated 11 months ago
- MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases (ICML 2024). ☆1,406 · Updated 9 months ago
- [ICLR 2024] Official PyTorch implementation of "FasterViT: Fast Vision Transformers with Hierarchical Attention" ☆904 · Updated 6 months ago
- Official code for "FeatUp: A Model-Agnostic Framework for Features at Any Resolution" (ICLR 2024) ☆1,629 · Updated last year
- Quick exploration into fine-tuning Florence-2 ☆339 · Updated last year