showlab / Show-o
[ICLR 2025] Repository for Show-o, One Single Transformer to Unify Multimodal Understanding and Generation.
☆1,284 · Updated this week
Alternatives and similar repositories for Show-o:
Users interested in Show-o are comparing it to the repositories listed below.
- SEED-Voken: A Series of Powerful Visual Tokenizers ☆849 · Updated last month
- This repo contains the code for a 1D tokenizer and generator ☆769 · Updated last week
- PyTorch implementation of Transfusion, "Predict the Next Token and Diffuse Images with One Multi-Modal Model", from Meta AI ☆985 · Updated last week
- Infinity ∞: Scaling Bitwise AutoRegressive Modeling for High-Resolution Image Synthesis ☆1,031 · Updated last month
- Next-Token Prediction is All You Need ☆2,042 · Updated last week
- 📖 A repository for organizing papers, code, and other resources related to unified multimodal models ☆422 · Updated 2 weeks ago
- Autoregressive Model Beats Diffusion: 🦙 Llama for Scalable Image Generation ☆1,623 · Updated 7 months ago
- PyTorch implementation of MAR + DiffLoss (https://arxiv.org/abs/2406.11838) ☆1,371 · Updated 6 months ago
- LaVIT: Empower the Large Language Model to Understand and Generate Visual Content ☆570 · Updated 5 months ago
- [ICLR'25 Oral] Representation Alignment for Generation: Training Diffusion Transformers Is Easier Than You Think ☆887 · Updated last week
- Investigating CoT Reasoning in Autoregressive Image Generation ☆559 · Updated this week
- Implementation of the MagViT2 tokenizer in PyTorch ☆597 · Updated 2 months ago
- [ECCV 2024] Official code for "Long-CLIP: Unlocking the Long-Text Capability of CLIP" ☆771 · Updated 7 months ago
- A fork of open-r1 that adds multimodal model training ☆1,108 · Updated last month
- [CVPR 2024] Panda-70M: Captioning 70M Videos with Multiple Cross-Modality Teachers ☆586 · Updated 5 months ago
- 🔥🔥🔥 A curated list of papers on LLM-based multimodal generation (image, video, 3D, and audio) ☆441 · Updated last week
- 🔥 Sa2VA: Marrying SAM2 with LLaVA for Dense Grounded Understanding of Images and Videos ☆981 · Updated last week
- lmms-eval: accelerating the development of large multimodal models (LMMs) with a one-click evaluation module ☆2,242 · Updated this week
- Explore the Multimodal "Aha Moment" on a 2B Model ☆524 · Updated last week
- Official PyTorch implementation of "SiT: Exploring Flow and Diffusion-based Generative Models with Scalable Interpolant Transformers" ☆790 · Updated last year
- A family of lightweight multimodal models ☆1,006 · Updated 4 months ago
- [TMLR 2025🔥] A survey of autoregressive models in vision ☆448 · Updated this week
- PyTorch implementation of RCG (https://arxiv.org/abs/2312.03701) ☆908 · Updated 6 months ago
- Eagle Family: Exploring Model Designs, Data Recipes, and Training Strategies for Frontier-Class Multimodal LLMs ☆635 · Updated last month
- [ICLR 2024 Spotlight] DreamLLM: Synergistic Multimodal Comprehension and Creation ☆427 · Updated 3 months ago
- Official implementation of "Lumina-mGPT: Illuminate Flexible Photorealistic Text-to-Image Generation with Multimodal Generative Pretraining" ☆548 · Updated 7 months ago
- [ICLR 2024🔥] Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment ☆795 · Updated last year
- Cambrian-1 is a family of multimodal LLMs with a vision-centric design ☆1,877 · Updated 4 months ago
- PyTorch implementation of FractalGen (https://arxiv.org/abs/2502.17437) ☆995 · Updated last month
- [Survey] Next Token Prediction Towards Multimodal Intelligence: A Comprehensive Survey ☆398 · Updated 2 months ago