apple / ml-mobileclip
This repository contains the official implementation of the research paper "MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training" (CVPR 2024).
☆971 · Updated 7 months ago
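For readers comparing these libraries, here is a minimal zero-shot classification sketch following the usage documented in the ml-mobileclip README; the checkpoint path, image file, and label set below are placeholders, and `mobileclip_s0` is just one of the released variants.

```python
import torch
from PIL import Image
import mobileclip  # package provided by apple/ml-mobileclip

# Placeholder paths: download a released checkpoint (e.g. mobileclip_s0.pt) first.
model, _, preprocess = mobileclip.create_model_and_transforms(
    "mobileclip_s0", pretrained="checkpoints/mobileclip_s0.pt"
)
tokenizer = mobileclip.get_tokenizer("mobileclip_s0")

image = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0)
text = tokenizer(["a diagram", "a dog", "a cat"])  # placeholder labels

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # L2-normalize the embeddings, then score image-text similarity with a softmax.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print("Label probs:", probs)
```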
Alternatives and similar repositories for ml-mobileclip
Users interested in ml-mobileclip are comparing it to the libraries listed below:
- This repository provides the code and model checkpoints for the AIMv1 and AIMv2 research projects. ☆1,305 · Updated 2 months ago
- ICLR 2024 Spotlight: curation/training code, metadata, distribution, and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Experts via Clustering ☆1,463 · Updated 3 months ago
- 4M: Massively Multimodal Masked Modeling ☆1,735 · Updated 3 weeks ago
- Grounding DINO 1.5: IDEA Research's Most Capable Open-World Object Detection Model Series ☆968 · Updated 5 months ago
- Strong and Open Vision Language Assistant for Mobile Devices ☆1,235 · Updated last year
- Official PyTorch implementation of "EdgeSAM: Prompt-In-the-Loop Distillation for On-Device Deployment of SAM" ☆1,030 · Updated last month
- State-of-the-art Image & Video CLIP, Multimodal Large Language Models, and More! ☆1,294 · Updated 3 weeks ago
- Official repository for "AM-RADIO: Reduce All Domains Into One" ☆1,211 · Updated 2 weeks ago
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses that are seamlessly integrated with object segmentation masks ☆889 · Updated 2 weeks ago
- Quick exploration into fine-tuning Florence-2 ☆320 · Updated 9 months ago
- MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases (ICML 2024) ☆1,303 · Updated 2 months ago
- This repository contains the official implementation of the research paper "FastViT: A Fast Hybrid Vision Transformer using Structural Reparameterization" ☆1,935 · Updated last year
- Code for the Molmo Vision-Language Model ☆521 · Updated 6 months ago
- LLM2CLIP makes SOTA pretrained CLIP models even more SOTA. ☆527 · Updated 3 months ago
- Grounded SAM 2: Ground and Track Anything in Videos with Grounding DINO, Florence-2 and SAM 2 ☆2,347 · Updated last month
- [AAAI 2025] Official PyTorch implementation of "TinySAM: Pushing the Envelope for Efficient Segment Anything Model" ☆487 · Updated 5 months ago
- Efficient Track Anything ☆571 · Updated 5 months ago
- The official repo for the paper "VeCLIP: Improving CLIP Training via Visual-enriched Captions" ☆244 · Updated 5 months ago
- [NeurIPS 2024] SlimSAM: 0.1% Data Makes Segment Anything Slim ☆331 · Updated 4 months ago
- RobustSAM: Segment Anything Robustly on Degraded Images (CVPR 2024 Highlight) ☆354 · Updated 9 months ago
- Official Implementation of CVPR 2024 highlight paper: Matching Anything by Segmenting Anything ☆1,305 · Updated last month
- LLaVA-Interactive-Demo ☆374 · Updated 11 months ago
- Projects based on SigLIP (Zhai et al., 2023) and Hugging Face transformers integration 🤗 ☆250 · Updated 4 months ago
- YOLOE: Real-Time Seeing Anything ☆1,364 · Updated last month
- Hiera: A fast, powerful, and simple hierarchical vision transformer. ☆994 · Updated last year
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ☆744 · Updated last year
- DINO-X: The World's Top-Performing Vision Model for Open-World Object Detection and Understanding ☆1,096 · Updated last week
- [ECCV 2024] Tokenize Anything via Prompting ☆585 · Updated 6 months ago
- VILA is a family of state-of-the-art vision language models (VLMs) for diverse multimodal AI tasks across the edge, data center, and cloud. ☆3,344 · Updated last week