cambrian-mllm / cambrian
Cambrian-1 is a family of multimodal LLMs with a vision-centric design.
☆1,985 · Updated Nov 7, 2025
Alternatives and similar repositories for cambrian
Users interested in cambrian are comparing it to the repositories listed below.
- Autoregressive Model Beats Diffusion: 🦙 Llama for Scalable Image Generation · ☆1,932 · Updated Aug 15, 2024
- Repository for Meta Chameleon, a mixed-modal early-fusion foundation model from FAIR. · ☆2,085 · Updated Jul 29, 2024
- Next-Token Prediction is All You Need · ☆2,339 · Updated Jan 12, 2026
- InternLM-XComposer2.5-OmniLive: A Comprehensive Multimodal System for Long-term Streaming Video and Audio Interactions · ☆2,919 · Updated May 26, 2025
- VILA is a family of state-of-the-art vision language models (VLMs) for diverse multimodal AI tasks across the edge, data center, and cloud. · ☆3,754 · Updated Nov 28, 2025
- EVE Series: Encoder-Free Vision-Language Models from BAAI · ☆367 · Updated Jul 24, 2025
- [CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o. An open-source multimodal dialogue model approaching GPT-4o performance. · ☆9,792 · Updated Sep 22, 2025
- [ICLR & NeurIPS 2025] Repository for Show-o series, One Single Transformer to Unify Multimodal Understanding and Generation.☆1,876Jan 8, 2026Updated last month
- A fork to add multimodal model training to open-r1☆1,449Feb 8, 2025Updated last year
- Emu Series: Generative Multimodal Models from BAAI☆1,765Jan 12, 2026Updated last month
- Open-source evaluation toolkit of large multi-modality models (LMMs), support 220+ LMMs, 80+ benchmarks☆3,816Updated this week
- EVA Series: Visual Representation Fantasies from BAAI · ☆2,648 · Updated Aug 1, 2024
- Official repo for "Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models" · ☆3,334 · Updated May 4, 2024
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond. · ☆24,446 · Updated Aug 12, 2024
- Long Context Transfer from Language to Vision · ☆400 · Updated Mar 18, 2025
- Official repo and evaluation implementation of VSI-Bench · ☆670 · Updated Aug 5, 2025
- Latest Advances on Multimodal Large Language Models · ☆17,337 · Updated Feb 7, 2026
- One-for-All Multimodal Evaluation Toolkit Across Text, Image, Video, and Audio Tasks · ☆3,635 · Updated this week
- NeurIPS 2025 Spotlight; ICLR 2024 Spotlight; CVPR 2024; EMNLP 2024 · ☆1,812 · Updated Nov 27, 2025
- SEED-Voken: A Series of Powerful Visual Tokenizers · ☆993 · Updated Nov 25, 2025
- LAVIS - A One-stop Library for Language-Vision Intelligence · ☆11,166 · Updated Nov 18, 2024
- [TMM 2025 🔥] Mixture-of-Experts for Large Vision-Language Models · ☆2,300 · Updated Jul 15, 2025
- Solve Visual Understanding with Reinforced VLMs · ☆5,833 · Updated Oct 21, 2025
- This repository provides the code and model checkpoints for AIMv1 and AIMv2 research projects. · ☆1,397 · Updated Aug 4, 2025
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… · ☆945 · Updated Aug 5, 2025
- [NeurIPS 2024 Best Paper Award] [GPT beats diffusion 🔥] [scaling laws in visual generation 📈] Official impl. of "Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction" · ☆8,614 · Updated Nov 10, 2025
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model · ☆281 · Updated Jun 25, 2024
- Official PyTorch Implementation of "Scalable Diffusion Models with Transformers" · ☆8,352 · Updated May 31, 2024
- [ICLR 2024 Spotlight] DreamLLM: Synergistic Multimodal Comprehension and Creation · ☆458 · Updated Dec 2, 2024
- Seed1.5-VL, a vision-language foundation model designed to advance general-purpose multimodal understanding and reasoning, achieving state-of-the-art… · ☆1,544 · Updated Jun 14, 2025
- Witness the aha moment of VLM with less than $3. · ☆4,029 · Updated May 19, 2025
- State-of-the-art Image & Video CLIP, Multimodal Large Language Models, and More! · ☆2,159 · Updated this week
- VisionLLaMA: A Unified LLaMA Backbone for Vision Tasks · ☆390 · Updated Jul 9, 2024
- 4M: Massively Multimodal Masked Modeling · ☆1,789 · Updated Jun 2, 2025
- The official repo of Qwen-VL (通义千问-VL) chat & pretrained large vision language model proposed by Alibaba Cloud. · ☆6,526 · Updated Aug 7, 2024
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more. · ☆3,355 · Updated May 19, 2025
- Eagle: Frontier Vision-Language Models with Data-Centric Strategies · ☆927 · Updated Oct 25, 2025