apple / ml-mobileclip
This repository contains the official implementation of the CVPR 2024 paper "MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training".
☆918 · Updated 5 months ago
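For orientation, here is a minimal zero-shot classification sketch in the spirit of the repo's README. The `mobileclip` package entry points (`create_model_and_transforms`, `get_tokenizer`), the `mobileclip_s0` variant name, and the checkpoint and image paths are assumptions drawn from the repository's documented usage and may differ across versions.

```python
# Hedged sketch: zero-shot image classification with MobileCLIP.
# Assumes the repo's `mobileclip` package and a locally downloaded checkpoint;
# paths below are hypothetical placeholders.
import torch
from PIL import Image
import mobileclip

model, _, preprocess = mobileclip.create_model_and_transforms(
    "mobileclip_s0", pretrained="checkpoints/mobileclip_s0.pt"  # hypothetical path
)
tokenizer = mobileclip.get_tokenizer("mobileclip_s0")

image = preprocess(Image.open("example.png").convert("RGB")).unsqueeze(0)  # hypothetical image
text = tokenizer(["a diagram", "a dog", "a cat"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalize embeddings so cosine similarity reduces to a dot product.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print("Label probs:", probs)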
Alternatives and similar repositories for ml-mobileclip:
Users interested in ml-mobileclip are comparing it to the libraries listed below.
- This repository provides the code and model checkpoints for the AIMv1 and AIMv2 research projects. ☆1,279 · Updated 2 weeks ago
- 4M: Massively Multimodal Masked Modeling ☆1,719 · Updated 2 months ago
- ICLR 2024 Spotlight: curation/training code, metadata, distribution and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Expert… ☆1,430 · Updated last month
- Grounding DINO 1.5: IDEA Research's Most Capable Open-World Object Detection Model Series ☆943 · Updated 3 months ago
- Official PyTorch implementation of "EdgeSAM: Prompt-In-the-Loop Distillation for On-Device Deployment of SAM" ☆1,007 · Updated 8 months ago
- Official repository for "AM-RADIO: Reduce All Domains Into One" ☆1,141 · Updated 2 weeks ago
- Efficient Track Anything ☆536 · Updated 4 months ago
- Quick exploration into fine-tuning Florence-2 ☆309 · Updated 7 months ago
- Hiera: A fast, powerful, and simple hierarchical vision transformer. ☆979 · Updated last year
- LLM2CLIP makes SOTA pretrained CLIP models even more powerful. ☆508 · Updated last month
- Famous Vision Language Models and Their Architectures ☆814 · Updated 2 months ago
- Strong and Open Vision Language Assistant for Mobile Devices ☆1,208 · Updated last year
- [AAAI 2025] Official PyTorch implementation of "TinySAM: Pushing the Envelope for Efficient Segment Anything Model" ☆470 · Updated 3 months ago
- The simplest, fastest repository for training/finetuning small-sized VLMs. ☆1,126 · Updated this week
- The official repo for the paper "VeCLIP: Improving CLIP Training via Visual-enriched Captions" ☆244 · Updated 3 months ago
- This repository contains the official implementation of the research paper "FastViT: A Fast Hybrid Vision Transformer using Structural R… ☆1,901 · Updated last year
- Official code for "FeatUp: A Model-Agnostic Framework for Features at Any Resolution" (ICLR 2024) ☆1,502 · Updated 10 months ago
- CLIP inference in plain C/C++ with no extra dependencies ☆497 · Updated 8 months ago
- Projects based on SigLIP (Zhai et al., 2023) and Hugging Face transformers integration 🤗 (see the sketch after this list) ☆231 · Updated 2 months ago
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ☆871 · Updated 5 months ago
- ☆345 · Updated 7 months ago
- YOLOE: Real-Time Seeing Anything ☆1,183 · Updated last week
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more. ☆2,843 · Updated last month
- VisionLLM Series ☆1,054 · Updated 2 months ago
- Official code of "EVF-SAM: Early Vision-Language Fusion for Text-Prompted Segment Anything Model" ☆404 · Updated last month
- VILA is a family of state-of-the-art vision language models (VLMs) for diverse multimodal AI tasks across the edge, data center, and clou… ☆3,218 · Updated this week
- Grounded SAM 2: Ground and Track Anything in Videos with Grounding DINO, Florence-2 and SAM 2 ☆2,072 · Updated 2 weeks ago
- State-of-the-art Image & Video CLIP, Multimodal Large Language Models, and More! ☆943 · Updated last week
- [NeurIPS 2024] Code release for "Segment Anything without Supervision" ☆461 · Updated 7 months ago
- Official Implementation of CVPR24 highlight paper: Matching Anything by Segmenting Anything ☆1,268 · Updated last week
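As a companion to the SigLIP entry above, here is a hedged sketch of zero-shot scoring through the Hugging Face transformers integration it mentions. The `google/siglip-base-patch16-224` checkpoint name, the `padding="max_length"` argument, and the image path are assumptions and may vary by release. Note that SigLIP trains with a pairwise sigmoid loss rather than CLIP's batch softmax, so each image-text pair is scored independently.

```python
# Hedged sketch: zero-shot image-text scoring with a SigLIP checkpoint
# via Hugging Face transformers. Checkpoint name and paths are assumptions.
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModel

model = AutoModel.from_pretrained("google/siglip-base-patch16-224")
processor = AutoProcessor.from_pretrained("google/siglip-base-patch16-224")

image = Image.open("example.png").convert("RGB")  # hypothetical image
texts = ["a photo of a cat", "a photo of a dog"]
inputs = processor(text=texts, images=image, padding="max_length", return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Sigmoid per pair, not softmax across candidates: probabilities need not sum to 1.
probs = torch.sigmoid(outputs.logits_per_image)
print("Pair probabilities:", probs)
```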