apple / ml-mobileclip
This repository contains the official implementation of the research paper "MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training" (CVPR 2024).
☆949 · Updated 6 months ago
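For orientation, MobileCLIP exposes an open_clip-style API. Below is a minimal zero-shot classification sketch based on the repository's documented usage; the checkpoint path and example image are placeholders, and `mobileclip_s0` is one of several released model sizes.

```python
import torch
from PIL import Image
import mobileclip  # installed from the ml-mobileclip repo

# 'mobileclip_s0' is the smallest released variant; the checkpoint
# path is a placeholder for a locally downloaded .pt file.
model, _, preprocess = mobileclip.create_model_and_transforms(
    'mobileclip_s0', pretrained='/path/to/mobileclip_s0.pt')
tokenizer = mobileclip.get_tokenizer('mobileclip_s0')

image = preprocess(Image.open('example.png').convert('RGB')).unsqueeze(0)
text = tokenizer(["a diagram", "a dog", "a cat"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Cosine similarity between L2-normalized embeddings,
    # softmaxed into per-caption probabilities.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print("Label probs:", text_probs)
```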
Alternatives and similar repositories for ml-mobileclip
Users interested in ml-mobileclip are comparing it to the repositories listed below.
- This repository provides the code and model checkpoints for the AIMv1 and AIMv2 research projects. ☆1,290 · Updated last month
- Grounding DINO 1.5: IDEA Research's Most Capable Open-World Object Detection Model Series ☆950 · Updated 4 months ago
- ICLR 2024 Spotlight: curation/training code, metadata, distribution, and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Expert… ☆1,450 · Updated 2 months ago
- 4M: Massively Multimodal Masked Modeling ☆1,721 · Updated last week
- Strong and Open Vision Language Assistant for Mobile Devices ☆1,221 · Updated last year
- Official PyTorch implementation of "EdgeSAM: Prompt-In-the-Loop Distillation for On-Device Deployment of SAM" ☆1,016 · Updated last week
- Efficient Track Anything ☆553 · Updated 4 months ago
- Official repository for "AM-RADIO: Reduce All Domains Into One" ☆1,166 · Updated last week
- State-of-the-art Image & Video CLIP, Multimodal Large Language Models, and More! ☆1,172 · Updated this week
- LLM2CLIP makes SOTA pretrained CLIP models even more capable. ☆520 · Updated 2 months ago
- A quick exploration into fine-tuning Florence-2 ☆314 · Updated 8 months ago
- This repository contains the official implementation of the research paper "FastViT: A Fast Hybrid Vision Transformer using Structural R… ☆1,921 · Updated last year
- [NeurIPS 2024] SlimSAM: 0.1% Data Makes Segment Anything Slim ☆328 · Updated 3 months ago
- YOLOE: Real-Time Seeing Anything ☆1,304 · Updated last month
- The official repo for the paper "VeCLIP: Improving CLIP Training via Visual-enriched Captions" ☆244 · Updated 4 months ago
- DINO-X: The World's Top-Performing Vision Model for Open-World Object Detection and Understanding ☆1,064 · Updated last week
- MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases (ICML 2024) ☆1,301 · Updated last month
- [AAAI 2025] Official PyTorch implementation of "TinySAM: Pushing the Envelope for Efficient Segment Anything Model" ☆479 · Updated 4 months ago
- ☆345 · Updated 8 months ago
- RobustSAM: Segment Anything Robustly on Degraded Images (CVPR 2024 Highlight) ☆350 · Updated 9 months ago
- Grounded SAM 2: Ground and Track Anything in Videos with Grounding DINO, Florence-2 and SAM 2 ☆2,231 · Updated last week
- Official code for "FeatUp: A Model-Agnostic Framework for Features at Any Resolution" (ICLR 2024) ☆1,515 · Updated 11 months ago
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ☆743 · Updated last year
- Projects based on SigLIP (Zhai et al., 2023) and Hugging Face transformers integration 🤗 ☆237 · Updated 3 months ago
- Official implementation of the CVPR 2024 highlight paper "Matching Anything by Segmenting Anything" ☆1,297 · Updated last month
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ☆883 · Updated 6 months ago
- LLaVA-Interactive-Demo ☆371 · Updated 10 months ago
- PyTorch implementation of "V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs" ☆619 · Updated last year
- Hiera: A fast, powerful, and simple hierarchical vision transformer ☆985 · Updated last year
- [ECCV 2024] Tokenize Anything via Prompting ☆582 · Updated 5 months ago