apple/ml-mobileclip
This repository contains the official implementation of the research paper "MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training" (CVPR 2024).
☆828 · Updated 2 months ago
Alternatives and similar repositories for ml-mobileclip:
Users interested in ml-mobileclip are comparing it to the libraries listed below.
- 4M: Massively Multimodal Masked Modeling ☆1,686 · Updated this week
- This repository provides the code and model checkpoints for the AIMv1 and AIMv2 research projects. ☆1,182 · Updated 3 months ago
- ICLR 2024 Spotlight: curation/training code, metadata, distribution, and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Expert… ☆1,352 · Updated 2 months ago
- Grounding DINO 1.5: IDEA Research's Most Capable Open-World Object Detection Model Series ☆897 · Updated last month
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ☆828 · Updated 2 months ago
- Quick exploration into fine-tuning Florence-2 ☆299 · Updated 5 months ago
- VisionLLM Series ☆1,002 · Updated 2 weeks ago
- Strong and Open Vision Language Assistant for Mobile Devices ☆1,139 · Updated 10 months ago
- PyTorch implementation of "V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs" ☆568 · Updated last year
- LLM2CLIP makes SOTA pretrained CLIP models more SOTA than ever. ☆470 · Updated last month
- Official repository for "AM-RADIO: Reduce All Domains Into One" ☆914 · Updated 2 weeks ago
- Famous Vision Language Models and Their Architectures ☆646 · Updated last week
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ☆722 · Updated last year
- The official repo for the paper "VeCLIP: Improving CLIP Training via Visual-enriched Captions" ☆239 · Updated 3 weeks ago
- When do we not need larger vision models? ☆368 · Updated last week
- ☆323 · Updated 4 months ago
- Hiera: A fast, powerful, and simple hierarchical vision transformer. ☆951 · Updated 11 months ago
- ☆599 · Updated last year
- RobustSAM: Segment Anything Robustly on Degraded Images (CVPR 2024 Highlight) ☆336 · Updated 5 months ago
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT, and more. ☆2,606 · Updated this week
- Efficient Track Anything ☆479 · Updated last month
- LLaVA-Interactive-Demo ☆362 · Updated 6 months ago
- ☆706 · Updated 11 months ago
- [AAAI 2025] Official PyTorch implementation of "TinySAM: Pushing the Envelope for Efficient Segment Anything Model" ☆441 · Updated last month
- This repository contains the official implementation of the research paper "FastViT: A Fast Hybrid Vision Transformer using Structural R… ☆1,864 · Updated last year
- DataComp: In search of the next generation of multimodal datasets ☆679 · Updated last year
- Code for the Molmo Vision-Language Model ☆292 · Updated 2 months ago
- Official implementation of the CVPR 2024 highlight paper "Matching Anything by Segmenting Anything" ☆1,203 · Updated 3 months ago
- Official repository of the paper "VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding" ☆258 · Updated 6 months ago
- Recipes for shrinking, optimizing, and customizing cutting-edge vision models. 💜 ☆1,197 · Updated this week