apple / ml-mobileclip
This repository contains the official implementations of the research papers "MobileCLIP" (CVPR 2024) and "MobileCLIP2" (TMLR, August 2025).
☆1,399 · Updated 3 months ago
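For orientation, here is a minimal zero-shot classification sketch modeled on ml-mobileclip's documented usage. The `mobileclip_s0` variant name, the checkpoint path, and the image filename are placeholders; check the repo's README for the released variants and download links.

```python
import torch
from PIL import Image
import mobileclip  # package provided by the ml-mobileclip repo

# Variant name and checkpoint path are illustrative placeholders;
# see the README for released models and where to download weights.
model, _, preprocess = mobileclip.create_model_and_transforms(
    "mobileclip_s0", pretrained="checkpoints/mobileclip_s0.pt"
)
tokenizer = mobileclip.get_tokenizer("mobileclip_s0")

image = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0)
text = tokenizer(["a diagram", "a dog", "a cat"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalize embeddings so the dot product equals cosine similarity.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    # Scaled similarities -> per-label probabilities.
    text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print("Label probs:", text_probs)
```

The same pattern should apply to the other MobileCLIP variants; only the model name and checkpoint change.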
Alternatives and similar repositories for ml-mobileclip
Users interested in ml-mobileclip are comparing it to the libraries listed below.
- This repository provides the code and model checkpoints for the AIMv1 and AIMv2 research projects. ☆1,395 · Updated 5 months ago
- 4M: Massively Multimodal Masked Modeling ☆1,785 · Updated 7 months ago
- NeurIPS 2025 Spotlight; ICLR 2024 Spotlight; CVPR 2024; EMNLP 2024 ☆1,803 · Updated 2 months ago
- Grounding DINO 1.5: IDEA Research's Most Capable Open-World Object Detection Model Series ☆1,081 · Updated last year
- This repository contains the official implementation of the research paper "FastViT: A Fast Hybrid Vision Transformer using Structural R…" ☆1,983 · Updated 2 years ago
- State-of-the-art Image & Video CLIP, Multimodal Large Language Models, and More! ☆2,113 · Updated this week
- Strong and Open Vision Language Assistant for Mobile Devices ☆1,322 · Updated last year
- Official PyTorch implementation of "EdgeSAM: Prompt-In-the-Loop Distillation for On-Device Deployment of SAM" ☆1,111 · Updated 8 months ago
- Efficient Track Anything ☆769 · Updated last year
- Official repository for "AM-RADIO: Reduce All Domains Into One" ☆1,431 · Updated 3 weeks ago
- Hiera: A fast, powerful, and simple hierarchical vision transformer. ☆1,050 · Updated last year
- Code for the Molmo Vision-Language Model ☆858 · Updated last year
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT, and more. ☆3,330 · Updated 8 months ago
- DINO-X: The World's Top-Performing Vision Model for Open-World Object Detection and Understanding ☆1,323 · Updated 6 months ago
- Official implementation of the CVPR 2024 highlight paper "Matching Anything by Segmenting Anything" ☆1,361 · Updated 8 months ago
- LLM2CLIP makes SOTA pretrained CLIP models even more SOTA. ☆594 · Updated last month
- YOLOE: Real-Time Seeing Anything [ICCV 2025] ☆2,011 · Updated 7 months ago
- A quick exploration into fine-tuning Florence-2 ☆339 · Updated last year
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ☆939 · Updated 5 months ago
- VisionLLM Series ☆1,133 · Updated 11 months ago
- [ICCV 2025] Implementation for Describe Anything: Detailed Localized Image and Video Captioning ☆1,446 · Updated 7 months ago
- Pocket-Sized Multimodal AI for content understanding and generation across multilingual texts, images, and 🔜 video, up to 5x faster than… ☆1,215 · Updated 2 months ago
- [AAAI 2025] Official PyTorch implementation of "TinySAM: Pushing the Envelope for Efficient Segment Anything Model" ☆534 · Updated last year
- Projects based on SigLIP (Zhai et al., 2023) and Hugging Face transformers integration 🤗 ☆297 · Updated 11 months ago
- [ICCV 2025] OpenVision: A Fully-Open, Cost-Effective Family of Advanced Vision Encoders for Multimodal Learning ☆431 · Updated last month
- A family of lightweight multimodal models. ☆1,051 · Updated last year
- Efficient vision foundation models for high-resolution generation and perception. ☆3,212 · Updated 4 months ago
- Official code for "FeatUp: A Model-Agnostic Framework for Features at Any Resolution" (ICLR 2024) ☆1,622 · Updated last year
- VILA is a family of state-of-the-art vision language models (VLMs) for diverse multimodal AI tasks across the edge, data center, and clou… ☆3,731 · Updated last month
- [ICLR 2024] Official PyTorch implementation of "FasterViT: Fast Vision Transformers with Hierarchical Attention" ☆902 · Updated 6 months ago