showlab / Show-o
[ICLR & NeurIPS 2025] Repository for Show-o series, One Single Transformer to Unify Multimodal Understanding and Generation.
☆1,815 · Updated last month
Alternatives and similar repositories for Show-o
Users interested in Show-o are comparing it to the repositories listed below.
- Autoregressive Model Beats Diffusion: 🦙 Llama for Scalable Image Generation ☆1,910 · Updated last year
- This repo contains the code for 1D tokenizer and generator ☆1,083 · Updated 8 months ago
- SEED-Voken: A Series of Powerful Visual Tokenizers ☆984 · Updated 3 weeks ago
- [CVPR 2025 Oral] Infinity ∞: Scaling Bitwise AutoRegressive Modeling for High-Resolution Image Synthesis ☆1,524 · Updated last month
- [CVPR 2025] The First Investigation of CoT Reasoning (RL, TTS, Reflection) in Image Generation ☆841 · Updated 6 months ago
- Next-Token Prediction is All You Need ☆2,265 · Updated 3 weeks ago
- PyTorch implementation of Transfusion, "Predict the Next Token and Diffuse Images with One Multi-Modal Model", from Meta AI ☆1,279 · Updated last week
- A repository for organizing papers, code, and other resources related to unified multimodal models ☆767 · Updated 2 months ago
- [NeurIPS 2025] An official implementation of Flow-GRPO: Training Flow Matching Models via Online RL ☆1,757 · Updated last month
- Official implementation of the BLIP3o series ☆1,609 · Updated 2 weeks ago
- [TMLR 2025 🔥] A survey of autoregressive models in vision ☆765 · Updated last month
- Video-R1: Reinforcing Video Reasoning in MLLMs [🔥 the first paper to explore R1 for video] ☆777 · Updated this week
- Official implementation of UnifiedReward & [NeurIPS 2025] UnifiedReward-Think ☆642 · Updated last week
- PyTorch implementation of MAR + DiffLoss (https://arxiv.org/abs/2406.11838) ☆1,814 · Updated last year
- [NeurIPS 2025] MMaDA - Open-Sourced Multimodal Large Diffusion Language Models ☆1,526 · Updated last month
- [ICLR'25 Oral] Representation Alignment for Generation: Training Diffusion Transformers Is Easier Than You Think ☆1,454 · Updated 9 months ago
- Awesome Unified Multimodal Models ☆950 · Updated 4 months ago
- [CVPR 2024] Panda-70M: Captioning 70M Videos with Multiple Cross-Modality Teachers ☆655 · Updated last year
- A fork to add multimodal model training to open-r1 ☆1,429 · Updated 10 months ago
- MM-EUREKA: Exploring the Frontiers of Multimodal Reasoning with Rule-based Reinforcement Learning ☆763 · Updated 3 months ago
- Anole: An Open, Autoregressive and Native Multimodal Model for Interleaved Image-Text Generation ☆815 · Updated 6 months ago
- 🔥🔥🔥 A curated list of papers on LLM-based multimodal generation (image, video, 3D, and audio) ☆528 · Updated 8 months ago
- [CVPR 2024 Highlight] VBench: We Evaluate Video Generation ☆1,364 · Updated last week
- Cambrian-1 is a family of multimodal LLMs with a vision-centric design ☆1,974 · Updated last month
- LaVIT: Empower the Large Language Model to Understand and Generate Visual Content ☆599 · Updated last year
- [ICLR 2025] Autoregressive Video Generation without Vector Quantization ☆604 · Updated last month
- A family of lightweight multimodal models ☆1,048 · Updated last year
- Implementation of the MagViT2 tokenizer in PyTorch ☆654 · Updated 11 months ago
- ✨✨ [CVPR 2025] Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis ☆695 · Updated last week
- [Survey] Next Token Prediction Towards Multimodal Intelligence: A Comprehensive Survey ☆462 · Updated 11 months ago