showlab / Show-o
[ICLR 2025] Repository for Show-o series, One Single Transformer to Unify Multimodal Understanding and Generation.
Alternatives and similar repositories for Show-o
Users interested in Show-o are comparing it to the repositories listed below.
- This repo contains the code for a 1D tokenizer and generator
- SEED-Voken: A Series of Powerful Visual Tokenizers
- Autoregressive Model Beats Diffusion: 🦙 Llama for Scalable Image Generation
- [CVPR 2025 Oral] Infinity ∞: Scaling Bitwise AutoRegressive Modeling for High-Resolution Image Synthesis
- Next-Token Prediction is All You Need
- PyTorch implementation of Transfusion, "Predict the Next Token and Diffuse Images with One Multi-Modal Model", from Meta AI
- 📖 A repository for organizing papers, code, and other resources related to unified multimodal models
- Official implementation of BLIP3o
- PyTorch implementation of MAR + DiffLoss (https://arxiv.org/abs/2406.11838)
- Official implementation of Flow-GRPO: Training Flow Matching Models via Online RL
- [CVPR 2025] The First Investigation of CoT Reasoning (RL, TTS, Reflection) in Image Generation
- MMaDA: Open-Sourced Multimodal Large Diffusion Language Models
- [ICLR 2025 Oral] Representation Alignment for Generation: Training Diffusion Transformers Is Easier Than You Think
- [TMLR 2025 🔥] A survey of autoregressive models in vision
- LaVIT: Empowering the Large Language Model to Understand and Generate Visual Content
- [CVPR 2024] Panda-70M: Captioning 70M Videos with Multiple Cross-Modality Teachers
- Implementation of the MagViT2 tokenizer in PyTorch
- Video-R1: Reinforcing Video Reasoning in MLLMs [🔥 the first paper to explore R1 for video]
- [CVPR 2024 Highlight] VBench: We Evaluate Video Generation
- Official PyTorch implementation of "SiT: Exploring Flow and Diffusion-based Generative Models with Scalable Interpolant Transformers"
- A family of lightweight multimodal models
- [CVPR 2025 Oral] Reconstruction vs. Generation: Taming Optimization Dilemma in Latent Diffusion Models
- Official implementation of UnifiedReward & UnifiedReward-Think
- Awesome Unified Multimodal Models
- ✨✨ [CVPR 2025] Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis
- [ECCV 2024] Official code for "Long-CLIP: Unlocking the Long-Text Capability of CLIP"
- [ICLR 2025] Autoregressive Video Generation without Vector Quantization
- Cambrian-1 is a family of multimodal LLMs with a vision-centric design
- MM-EUREKA: Exploring the Frontiers of Multimodal Reasoning with Rule-based Reinforcement Learning
- Anole: An Open, Autoregressive, and Native Multimodal Model for Interleaved Image-Text Generation