cambrian-mllm / cambrian
Cambrian-1 is a family of multimodal LLMs with a vision-centric design.
☆1,901 · Updated 6 months ago
Alternatives and similar repositories for cambrian
Users interested in cambrian are comparing it to the libraries listed below.
- Next-Token Prediction is All You Need ☆2,115 · Updated last month
- Repository for Meta Chameleon, a mixed-modal early-fusion foundation model from FAIR. ☆1,991 · Updated 9 months ago
- Autoregressive Model Beats Diffusion: 🦙 Llama for Scalable Image Generation ☆1,750 · Updated 9 months ago
- Mixture-of-Experts for Large Vision-Language Models ☆2,154 · Updated 5 months ago
- ☆3,780 · Updated last week
- Emu Series: Generative Multimodal Models from BAAI ☆1,720 · Updated 7 months ago
- VideoSys: An easy and efficient system for video generation ☆1,963 · Updated 2 months ago
- [ICLR 2025] Repository for Show-o, One Single Transformer to Unify Multimodal Understanding and Generation. ☆1,396 · Updated 2 weeks ago
- A family of lightweight multimodal models. ☆1,016 · Updated 5 months ago
- LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models (ECCV 2024) ☆806 · Updated 9 months ago
- [TMLR 2025] Latte: Latent Diffusion Transformer for Video Generation. ☆1,825 · Updated last month
- This repository provides the code and model checkpoints for the AIMv1 and AIMv2 research projects. ☆1,281 · Updated 3 weeks ago
- MiniSora: A community effort to explore the implementation path and future development direction of Sora. ☆1,267 · Updated 2 months ago
- Open-source evaluation toolkit for large multi-modality models (LMMs), supporting 220+ LMMs and 80+ benchmarks ☆2,358 · Updated this week
- ICLR 2024 Spotlight: curation/training code, metadata, distribution and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Expert… ☆1,435 · Updated 2 months ago
- VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs ☆1,159 · Updated 3 months ago
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ☆740 · Updated last year
- A novel Multimodal Large Language Model (MLLM) architecture, designed to structurally align visual and textual embeddings. ☆904 · Updated last month
- Pytorch implementation of Transfusion, "Predict the Next Token and Diffuse Images with One Multi-Modal Model", from MetaAI ☆1,093 · Updated this week
- Eagle Family: Exploring Model Designs, Data Recipes and Training Strategies for Frontier-Class Multimodal LLMs ☆768 · Updated 2 weeks ago
- VILA is a family of state-of-the-art vision language models (VLMs) for diverse multimodal AI tasks across the edge, data center, and clou… ☆3,232 · Updated last week
- A Framework of Small-scale Large Multimodal Models ☆817 · Updated 2 weeks ago
- The official GitHub page for the review paper "Sora: A Review on Background, Technology, Limitations, and Opportunities of Large Vision M… ☆498 · Updated last year
- ☆1,807 · Updated 10 months ago
- 🔥🔥 LLaVA++: Extending LLaVA with Phi-3 and LLaMA-3 (LLaVA LLaMA-3, LLaVA Phi-3) ☆834 · Updated 10 months ago
- VisionLLM Series ☆1,059 · Updated 2 months ago
- 【ICLR 2024🔥】 Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment ☆805 · Updated last year
- SEED-Voken: A Series of Powerful Visual Tokenizers ☆878 · Updated 2 months ago
- ☆609 · Updated last year
- Accelerating the development of large multimodal models (LMMs) with a one-click evaluation module, lmms-eval. ☆2,440 · Updated last week