showlab / Show-o
[ICLR 2025] Repository for Show-o, One Single Transformer to Unify Multimodal Understanding and Generation.
☆1,417 · Updated last month
Alternatives and similar repositories for Show-o
Users interested in Show-o are comparing it to the libraries listed below.
- This repo contains the code for the 1D tokenizer and generator ☆887 · Updated 2 months ago
- SEED-Voken: A Series of Powerful Visual Tokenizers ☆885 · Updated 3 months ago
- Autoregressive Model Beats Diffusion: 🦙 Llama for Scalable Image Generation ☆1,761 · Updated 9 months ago
- [CVPR 2025] The First Investigation of CoT Reasoning (RL, TTS, Reflection) in Image Generation ☆691 · Updated last week
- 📖 A repository for organizing papers, code, and other resources related to unified multimodal models ☆554 · Updated last month
- [CVPR 2025 Oral] Infinity ∞: Scaling Bitwise AutoRegressive Modeling for High-Resolution Image Synthesis ☆1,286 · Updated last month
- Next-Token Prediction is All You Need ☆2,134 · Updated 2 months ago
- PyTorch implementation of MAR + DiffLoss (https://arxiv.org/abs/2406.11838) ☆1,576 · Updated 8 months ago
- [TMLR 2025 🔥] A survey of autoregressive models in vision ☆618 · Updated this week
- [ICLR'25 Oral] Representation Alignment for Generation: Training Diffusion Transformers Is Easier Than You Think ☆1,062 · Updated 2 months ago
- PyTorch implementation of Transfusion, "Predict the Next Token and Diffuse Images with One Multi-Modal Model", from Meta AI ☆1,123 · Updated 2 weeks ago
- Official implementation of Flow-GRPO: Training Flow Matching Models via Online RL ☆641 · Updated last week
- LaVIT: Empower the Large Language Model to Understand and Generate Visual Content ☆580 · Updated 7 months ago
- Implementation of the MagViT2 tokenizer in PyTorch ☆603 · Updated 4 months ago
- A fork to add multimodal model training to open-r1 ☆1,272 · Updated 3 months ago
- Official PyTorch implementation of "SiT: Exploring Flow and Diffusion-based Generative Models with Scalable Interpolant Transformers" ☆847 · Updated last year
- Video-R1: Reinforcing Video Reasoning in MLLMs [🔥 the first paper to explore R1 for video] ☆535 · Updated 2 weeks ago
- [ECCV 2024] Official code for "Long-CLIP: Unlocking the Long-Text Capability of CLIP" ☆812 · Updated 9 months ago
- [CVPR 2025 Oral & Best Paper Award Candidate] Reconstruction vs. Generation: Taming Optimization Dilemma in Latent Diffusion Models ☆824 · Updated last week
- [CVPR 2024] Panda-70M: Captioning 70M Videos with Multiple Cross-Modality Teachers ☆602 · Updated 7 months ago
- Official implementation of UnifiedReward & UnifiedReward-Think ☆382 · Updated last week
- [Survey] Next Token Prediction Towards Multimodal Intelligence: A Comprehensive Survey ☆443 · Updated 4 months ago
- [ICLR 2025] Autoregressive Video Generation without Vector Quantization ☆509 · Updated last week
- A family of lightweight multimodal models ☆1,018 · Updated 6 months ago
- Liquid: Language Models are Scalable and Unified Multi-modal Generators ☆579 · Updated last month
- A reading list of video generation ☆578 · Updated this week
- MM-EUREKA: Exploring the Frontiers of Multimodal Reasoning with Rule-based Reinforcement Learning ☆619 · Updated last week
- LLM2CLIP makes the SOTA pretrained CLIP model even more SOTA ☆517 · Updated 2 months ago
- Accelerating the development of large multimodal models (LMMs) with the one-click evaluation module lmms-eval ☆2,515 · Updated this week
- The first paper to explore how to effectively use RL for MLLMs; introduces Vision-R1, a reasoning MLLM that leverages cold-sta… ☆573 · Updated 3 weeks ago