showlab / Show-o
[ICLR 2025] Repository for the Show-o series: One Single Transformer to Unify Multimodal Understanding and Generation.
★1,689 · Updated last week
Alternatives and similar repositories for Show-o
Users interested in Show-o are comparing it to the libraries listed below.
- This repo contains the code for 1D tokenizer and generator ★1,023 · Updated 5 months ago
- Autoregressive Model Beats Diffusion: 🦙 Llama for Scalable Image Generation ★1,858 · Updated last year
- Next-Token Prediction is All You Need ★2,195 · Updated 5 months ago
- SEED-Voken: A Series of Powerful Visual Tokenizers ★935 · Updated 2 months ago
- [CVPR 2025 Oral] Infinity ∞: Scaling Bitwise AutoRegressive Modeling for High-Resolution Image Synthesis ★1,424 · Updated 2 months ago
- A repository for organizing papers, code, and other resources related to unified multimodal models. ★681 · Updated last month
- Pytorch implementation of Transfusion, "Predict the Next Token and Diffuse Images with One Multi-Modal Model", from MetaAI ★1,201 · Updated 2 months ago
- An official implementation of Flow-GRPO: Training Flow Matching Models via Online RL ★1,273 · Updated this week
- [CVPR 2025] The First Investigation of CoT Reasoning (RL, TTS, Reflection) in Image Generation ★795 · Updated 3 months ago
- PyTorch implementation of MAR+DiffLoss https://arxiv.org/abs/2406.11838 ★1,724 · Updated 11 months ago
- Official implementation of BLIP3o-Series ★1,468 · Updated last week
- [ICLR'25 Oral] Representation Alignment for Generation: Training Diffusion Transformers Is Easier Than You Think ★1,293 · Updated 5 months ago
- [TMLR 2025🔥] A survey for the autoregressive models in vision. ★693 · Updated this week
- MMaDA - Open-Sourced Multimodal Large Diffusion Language Models ★1,341 · Updated 3 weeks ago
- Video-R1: Reinforcing Video Reasoning in MLLMs [🔥 the first paper to explore R1 for video] ★685 · Updated this week
- [CVPR 2024] Panda-70M: Captioning 70M Videos with Multiple Cross-Modality Teachers ★621 · Updated 10 months ago
- Official implementation of UnifiedReward & UnifiedReward-Think ★531 · Updated last week
- Awesome Unified Multimodal Models ★671 · Updated 3 weeks ago
- A fork to add multimodal model training to open-r1 ★1,387 · Updated 7 months ago
- LaVIT: Empower the Large Language Model to Understand and Generate Visual Content ★590 · Updated 11 months ago
- Implementation of MagViT2 Tokenizer in Pytorch ★630 · Updated 8 months ago
- [CVPR2024 Highlight] VBench - We Evaluate Video Generation ★1,206 · Updated this week
- 🔥🔥🔥 A curated list of papers on LLMs-based multimodal generation (image, video, 3D and audio). ★507 · Updated 5 months ago
- ✨✨ [CVPR 2025] Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis ★635 · Updated 3 weeks ago
- PyTorch implementation of FractalGen https://arxiv.org/abs/2502.17437 ★1,159 · Updated 6 months ago
- [CVPR 2025 Oral] Reconstruction vs. Generation: Taming Optimization Dilemma in Latent Diffusion Models ★1,166 · Updated 3 months ago
- [ICLR 2025] Autoregressive Video Generation without Vector Quantization ★567 · Updated last month
- Official PyTorch Implementation of "SiT: Exploring Flow and Diffusion-based Generative Models with Scalable Interpolant Transformers" ★959 · Updated last year
- [Survey] Next Token Prediction Towards Multimodal Intelligence: A Comprehensive Survey ★447 · Updated 7 months ago
- Official Implementation of "Lumina-mGPT: Illuminate Flexible Photorealistic Text-to-Image Generation with Multimodal Generative Pretrainiβ¦β622Updated 5 months ago