showlab / Show-o
[ICLR & NeurIPS 2025] Repository for the Show-o series: One Single Transformer to Unify Multimodal Understanding and Generation.
☆1,847 · Updated this week
Alternatives and similar repositories for Show-o
Users interested in Show-o are comparing it to the libraries listed below:
- SEED-Voken: A Series of Powerful Visual Tokenizers ☆987 · Updated last month
- This repo contains the code for the 1D tokenizer and generator ☆1,094 · Updated 9 months ago
- 📖 This is a repository for organizing papers, code, and other resources related to unified multimodal models. ☆779 · Updated 3 months ago
- Autoregressive Model Beats Diffusion: 🦙 Llama for Scalable Image Generation ☆1,918 · Updated last year
- Next-Token Prediction is All You Need ☆2,274 · Updated this week
- [NeurIPS 2025] An official implementation of Flow-GRPO: Training Flow Matching Models via Online RL ☆1,871 · Updated 2 months ago
- [CVPR 2025 Oral] Infinity ∞: Scaling Bitwise AutoRegressive Modeling for High-Resolution Image Synthesis ☆1,536 · Updated 2 months ago
- PyTorch implementation of Transfusion, "Predict the Next Token and Diffuse Images with One Multi-Modal Model", from Meta AI ☆1,305 · Updated last week
- Official implementation of BLIP3o-Series ☆1,617 · Updated last month
- [CVPR 2025] The First Investigation of CoT Reasoning (RL, TTS, Reflection) in Image Generation ☆848 · Updated 7 months ago
- Awesome Unified Multimodal Models ☆1,026 · Updated 4 months ago
- PyTorch implementation of MAR+DiffLoss https://arxiv.org/abs/2406.11838 ☆1,838 · Updated last year
- [TMLR 2025🔥] A survey of autoregressive models in vision. ☆777 · Updated 2 months ago
- Video-R1: Reinforcing Video Reasoning in MLLMs [🔥 the first paper to explore R1 for video] ☆794 · Updated 3 weeks ago
- Official implementation of UnifiedReward & [NeurIPS 2025] UnifiedReward-Think ☆660 · Updated last week
- [ICLR'25 Oral] Representation Alignment for Generation: Training Diffusion Transformers Is Easier Than You Think ☆1,501 · Updated 9 months ago
- MMaDA - Open-Sourced Multimodal Large Diffusion Language Models ☆1,549 · Updated last month
- A fork to add multimodal model training to open-r1 ☆1,435 · Updated 11 months ago
- MM-EUREKA: Exploring the Frontiers of Multimodal Reasoning with Rule-based Reinforcement Learning ☆766 · Updated 4 months ago
- [CVPR 2024] Panda-70M: Captioning 70M Videos with Multiple Cross-Modality Teachers ☆662 · Updated last year
- [CVPR 2024 Highlight] VBench - We Evaluate Video Generation ☆1,418 · Updated this week
- LaVIT: Empower the Large Language Model to Understand and Generate Visual Content ☆601 · Updated last year
- 🔥🔥🔥 A curated list of papers on LLM-based multimodal generation (image, video, 3D, and audio). ☆537 · Updated 9 months ago
- Implementation of MagViT2 Tokenizer in PyTorch ☆657 · Updated last year
- [Survey] Next Token Prediction Towards Multimodal Intelligence: A Comprehensive Survey ☆468 · Updated 11 months ago
- [CVPR 2025 Oral] Reconstruction vs. Generation: Taming Optimization Dilemma in Latent Diffusion Models ☆1,357 · Updated 3 weeks ago
- Official PyTorch Implementation of "Diffusion Transformers with Representation Autoencoders" ☆1,681 · Updated 2 weeks ago
- Cambrian-1 is a family of multimodal LLMs with a vision-centric design. ☆1,977 · Updated 2 months ago
- This is the first paper to explore how to effectively use R1-like RL for MLLMs and introduce Vision-R1, a reasoning MLLM that leverages … ☆748 · Updated 4 months ago
- PyTorch implementation of FractalGen https://arxiv.org/abs/2502.17437 ☆1,215 · Updated 10 months ago