showlab / Show-o
[ICLR & NeurIPS 2025] Repository for Show-o series, One Single Transformer to Unify Multimodal Understanding and Generation.
☆1,721 · Updated last week
Alternatives and similar repositories for Show-o
Users interested in Show-o are comparing it to the repositories listed below.
- This repo contains the code for 1D tokenizer and generator ☆1,046 · Updated 6 months ago
- Next-Token Prediction is All You Need ☆2,206 · Updated 6 months ago
- SEED-Voken: A Series of Powerful Visual Tokenizers ☆951 · Updated 3 months ago
- Autoregressive Model Beats Diffusion: 🦙 Llama for Scalable Image Generation ☆1,872 · Updated last year
- PyTorch implementation of Transfusion, "Predict the Next Token and Diffuse Images with One Multi-Modal Model", from Meta AI ☆1,218 · Updated 3 months ago
- An official implementation of Flow-GRPO: Training Flow Matching Models via Online RL ☆1,373 · Updated 2 weeks ago
- [CVPR 2025 Oral] Infinity ∞: Scaling Bitwise AutoRegressive Modeling for High-Resolution Image Synthesis ☆1,448 · Updated 3 months ago
- 📖 A repository for organizing papers, code, and other resources related to unified multimodal models ☆697 · Updated last week
- [CVPR 2025] The First Investigation of CoT Reasoning (RL, TTS, Reflection) in Image Generation ☆808 · Updated 4 months ago
- Official implementation of the BLIP3o series ☆1,498 · Updated 2 weeks ago
- [ICLR'25 Oral] Representation Alignment for Generation: Training Diffusion Transformers Is Easier Than You Think ☆1,340 · Updated 6 months ago
- [TMLR 2025🔥] A survey of autoregressive models in vision ☆712 · Updated last week
- PyTorch implementation of MAR+DiffLoss https://arxiv.org/abs/2406.11838 ☆1,759 · Updated last year
- [NeurIPS 2025] MMaDA - Open-Sourced Multimodal Large Diffusion Language Models ☆1,400 · Updated 2 weeks ago
- Official implementation of UnifiedReward & [NeurIPS 2025] UnifiedReward-Think ☆555 · Updated last week
- Video-R1: Reinforcing Video Reasoning in MLLMs [🔥 the first paper to explore R1 for video] ☆707 · Updated 2 weeks ago
- Awesome Unified Multimodal Models ☆766 · Updated last month
- [CVPR 2024] Panda-70M: Captioning 70M Videos with Multiple Cross-Modality Teachers ☆630 · Updated 11 months ago
- A fork adding multimodal model training to open-r1 ☆1,402 · Updated 7 months ago
- 🔥🔥🔥 A curated list of papers on LLM-based multimodal generation (image, video, 3D, and audio) ☆510 · Updated 6 months ago
- [CVPR 2024 Highlight] VBench - We Evaluate Video Generation ☆1,229 · Updated 3 weeks ago
- LaVIT: Empower the Large Language Model to Understand and Generate Visual Content ☆594 · Updated last year
- MM-EUREKA: Exploring the Frontiers of Multimodal Reasoning with Rule-based Reinforcement Learning ☆736 · Updated last month
- Implementation of MagViT2 Tokenizer in PyTorch ☆636 · Updated 8 months ago
- PyTorch implementation of FractalGen https://arxiv.org/abs/2502.17437 ☆1,173 · Updated 7 months ago
- Cambrian-1 is a family of multimodal LLMs with a vision-centric design ☆1,954 · Updated 11 months ago
- [CVPR 2025 Oral] Reconstruction vs. Generation: Taming Optimization Dilemma in Latent Diffusion Models ☆1,193 · Updated 3 months ago
- A collection of awesome video generation studies ☆635 · Updated 2 weeks ago
- [Survey] Next Token Prediction Towards Multimodal Intelligence: A Comprehensive Survey ☆449 · Updated 8 months ago
- [ECCV 2024] Official code for "Long-CLIP: Unlocking the Long-Text Capability of CLIP" ☆851 · Updated last year