apple / ml-aim
This repository provides the code and model checkpoints for the AIMv1 and AIMv2 research projects.
☆1,290 · Updated last month
Alternatives and similar repositories for ml-aim
Users interested in ml-aim are comparing it to the repositories listed below.
- ICLR 2024 Spotlight: curation/training code, metadata, distribution and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Expert… ☆1,447 · Updated 2 months ago
- 4M: Massively Multimodal Masked Modeling ☆1,721 · Updated last week
- State-of-the-art Image & Video CLIP, Multimodal Large Language Models, and More! ☆1,172 · Updated this week
- LLM2CLIP makes SOTA pretrained CLIP models even more SOTA. ☆517 · Updated 2 months ago
- Cambrian-1 is a family of multimodal LLMs with a vision-centric design. ☆1,908 · Updated 7 months ago
- Official repository for "AM-RADIO: Reduce All Domains Into One" ☆1,166 · Updated last week
- Autoregressive Model Beats Diffusion: 🦙 Llama for Scalable Image Generation ☆1,761 · Updated 9 months ago
- Hiera: A fast, powerful, and simple hierarchical vision transformer. ☆985 · Updated last year
- VisionLLM Series ☆1,066 · Updated 3 months ago
- A suite of image and video neural tokenizers ☆1,627 · Updated 3 months ago
- When do we not need larger vision models? ☆393 · Updated 3 months ago
- DataComp: In search of the next generation of multimodal datasets ☆710 · Updated last month
- Famous Vision Language Models and Their Architectures ☆843 · Updated 3 months ago
- Next-Token Prediction is All You Need ☆2,134 · Updated 2 months ago
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ☆883 · Updated 6 months ago
- This repository contains the official implementation of the research paper, "MobileCLIP: Fast Image-Text Models through Multi-Modal Reinf… ☆941 · Updated 6 months ago
- [ICML 2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation ☆789 · Updated 8 months ago
- Anole: An Open, Autoregressive, and Native Multimodal Model for Interleaved Image-Text Generation ☆765 · Updated 9 months ago
- The official repo for the paper "VeCLIP: Improving CLIP Training via Visual-enriched Captions" ☆244 · Updated 4 months ago
- Repository for Meta Chameleon, a mixed-modal early-fusion foundation model from FAIR. ☆2,005 · Updated 10 months ago
- [ICLR 2025] Repository for Show-o, One Single Transformer to Unify Multimodal Understanding and Generation. ☆1,417 · Updated last month
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ☆743 · Updated last year
- This repo contains the code for a 1D tokenizer and generator ☆887 · Updated 2 months ago
- Eagle Family: Exploring Model Designs, Data Recipes and Training Strategies for Frontier-Class Multimodal LLMs ☆779 · Updated last month
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more. ☆2,900 · Updated 2 weeks ago
- PyTorch Implementation of "V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs" ☆614 · Updated last year
- SEED-Voken: A Series of Powerful Visual Tokenizers ☆885 · Updated 3 months ago
- LLaVA-Interactive-Demo ☆371 · Updated 10 months ago
- ☆612 · Updated last year
- A family of lightweight multimodal models. ☆1,018 · Updated 6 months ago