apple / ml-aim
This repository provides the code and model checkpoints for the AIMv1 and AIMv2 research projects.
☆1,267 · Updated 4 months ago
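For orientation, here is a minimal sketch of how one might pull image features from an AIMv2 checkpoint through Hugging Face transformers. The checkpoint name `apple/aimv2-large-patch14-224` and the `trust_remote_code=True` flag are assumptions rather than details confirmed by this listing; the ml-aim README documents the official entry points.

```python
# Minimal sketch (not the official ml-aim API): extracting image features from an
# AIMv2 checkpoint via Hugging Face transformers. The checkpoint name and the
# trust_remote_code flag are assumptions; consult the ml-aim README for exact usage.
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

ckpt = "apple/aimv2-large-patch14-224"  # assumed Hub checkpoint name

processor = AutoImageProcessor.from_pretrained(ckpt)
model = AutoModel.from_pretrained(ckpt, trust_remote_code=True)

image = Image.open("example.jpg")       # any RGB image on disk
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)               # per-patch features; exact output fields depend on the released model class
print(outputs.last_hidden_state.shape)  # e.g. (1, num_patches, hidden_dim)
```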
Alternatives and similar repositories for ml-aim:
Users interested in ml-aim are comparing it to the repositories listed below.
- ICLR 2024 Spotlight: curation/training code, metadata, distribution and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Expert… ☆1,420 · Updated last month
- 4M: Massively Multimodal Masked Modeling ☆1,713 · Updated last month
- Hiera: A fast, powerful, and simple hierarchical vision transformer. ☆971 · Updated last year
- This repository contains the official implementation of the research paper, "MobileCLIP: Fast Image-Text Models through Multi-Modal Reinf… ☆887 · Updated 4 months ago
- When do we not need larger vision models? ☆387 · Updated 2 months ago
- Autoregressive Model Beats Diffusion: 🦙 Llama for Scalable Image Generation ☆1,708 · Updated 8 months ago
- Official repository for "AM-RADIO: Reduce All Domains Into One" ☆1,105 · Updated this week
- Cambrian-1 is a family of multimodal LLMs with a vision-centric design. ☆1,886 · Updated 5 months ago
- Repository for Meta Chameleon, a mixed-modal early-fusion foundation model from FAIR. ☆1,980 · Updated 8 months ago
- VisionLLM Series ☆1,044 · Updated last month
- DataComp: In search of the next generation of multimodal datasets ☆699 · Updated last year
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ☆739 · Updated last year
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ☆864 · Updated 4 months ago
- A suite of image and video neural tokenizers ☆1,612 · Updated 2 months ago
- This repo contains the code for the 1D tokenizer and generator ☆832 · Updated 3 weeks ago
- LLM2CLIP pushes SOTA pretrained CLIP models even further. ☆505 · Updated 3 weeks ago
- [ICML 2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation ☆763 · Updated 6 months ago
- PyTorch implementation of "V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs" ☆585 · Updated last year
- PyTorch implementation of Transfusion, "Predict the Next Token and Diffuse Images with One Multi-Modal Model", from Meta AI ☆1,052 · Updated last month
- [ECCV 2024] VideoMamba: State Space Model for Efficient Video Understanding ☆936 · Updated 9 months ago
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more. ☆2,815 · Updated last month
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆862 · Updated 2 months ago
- The official repo for the paper "VeCLIP: Improving CLIP Training via Visual-enriched Captions" ☆242 · Updated 2 months ago
- Mixture-of-Experts for Large Vision-Language Models ☆2,145 · Updated 4 months ago
- VisionLLaMA: A Unified LLaMA Backbone for Vision Tasks ☆386 · Updated 9 months ago
- A family of lightweight multimodal models. ☆1,013 · Updated 5 months ago
- Next-Token Prediction is All You Need ☆2,076 · Updated last month
- LLaVA-Interactive-Demo ☆368 · Updated 8 months ago
- VILA is a family of state-of-the-art vision language models (VLMs) for diverse multimodal AI tasks across the edge, data center, and clou… ☆3,138 · Updated 3 weeks ago