apple / ml-aim
This repository provides the code and model checkpoints for the AIMv1 and AIMv2 research projects (a brief loading sketch follows below).
☆1,383 · Updated 3 months ago
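Since the repository ships checkpoints, a typical way to try them is through the Hugging Face Hub. The following is a minimal sketch, not the repository's documented API: the checkpoint identifier, the example image URL, and the `last_hidden_state` output field are all assumptions typical of Hub-hosted vision encoders; check the repository README for the published names.

```python
# Minimal sketch: loading an AIMv2 image encoder via Hugging Face transformers.
# ASSUMPTIONS: the checkpoint id and image URL below are illustrative, and the
# model is assumed to return patch features in `last_hidden_state`.
import requests
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

ckpt = "apple/aimv2-large-patch14-224"  # assumed checkpoint id

# trust_remote_code=True is assumed because the model class ships with the
# checkpoint rather than with the transformers library itself.
processor = AutoImageProcessor.from_pretrained(ckpt, trust_remote_code=True)
model = AutoModel.from_pretrained(ckpt, trust_remote_code=True)

url = "https://example.com/cat.png"  # placeholder image URL
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
features = model(**inputs).last_hidden_state  # assumed patch-level features
print(features.shape)
```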
Alternatives and similar repositories for ml-aim
Users interested in ml-aim are comparing it to the libraries listed below.
- 4M: Massively Multimodal Masked Modeling ☆1,770 · Updated 5 months ago
- ICLR 2024 Spotlight: curation/training code, metadata, distribution, and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Expert… ☆1,704 · Updated last month
- Cambrian-1 is a family of multimodal LLMs with a vision-centric design. ☆1,964 · Updated this week
- Repository for Meta Chameleon, a mixed-modal early-fusion foundation model from FAIR. ☆2,065 · Updated last year
- State-of-the-art Image & Video CLIP, Multimodal Large Language Models, and More! ☆1,728 · Updated last month
- Official implementation of the research papers "MobileCLIP" (CVPR 2024) and "MobileCLIP2" (TMLR, August 2025) ☆1,297 · Updated last month
- A suite of image and video neural tokenizers ☆1,678 · Updated 9 months ago
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT, and more. ☆3,216 · Updated 5 months ago
- Autoregressive Model Beats Diffusion: 🦙 Llama for Scalable Image Generation ☆1,885 · Updated last year
- Official repository for "AM-RADIO: Reduce All Domains Into One" ☆1,384 · Updated 3 weeks ago
- Code for the Molmo Vision-Language Model ☆793 · Updated 11 months ago
- VisionLLM Series ☆1,122 · Updated 8 months ago
- Hiera: A fast, powerful, and simple hierarchical vision transformer. ☆1,037 · Updated last year
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ☆926 · Updated 3 months ago
- LLM2CLIP makes a SOTA pretrained CLIP model even more SOTA. ☆563 · Updated 4 months ago
- PyTorch implementation of "V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs" ☆680 · Updated last year
- When do we not need larger vision models? ☆412 · Updated 9 months ago
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ☆760 · Updated last year
- A family of lightweight multimodal models. ☆1,046 · Updated 11 months ago
- Next-Token Prediction is All You Need ☆2,251 · Updated 7 months ago
- PyTorch code and models for V-JEPA self-supervised learning from video. ☆3,258 · Updated 8 months ago
- Emu Series: Generative Multimodal Models from BAAI ☆1,754 · Updated last year
- DataComp: In search of the next generation of multimodal datasets ☆745 · Updated 6 months ago
- 【TMM 2025 🔥】 Mixture-of-Experts for Large Vision-Language Models ☆2,270 · Updated 3 months ago
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆926 · Updated last week
- Grounding DINO 1.5: IDEA Research's Most Capable Open-World Object Detection Model Series ☆1,052 · Updated 9 months ago
- [ICLR 2025 Spotlight 🔥] Official implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters ☆576 · Updated 9 months ago
- Strong and Open Vision Language Assistant for Mobile Devices ☆1,292 · Updated last year
- VILA is a family of state-of-the-art vision language models (VLMs) for diverse multimodal AI tasks across the edge, data center, and cloud. ☆3,644 · Updated 3 weeks ago