showlab / Show-o
[ICLR 2025] Repository for the Show-o series: One Single Transformer to Unify Multimodal Understanding and Generation.
☆1,568 · Updated this week
Alternatives and similar repositories for Show-o
Users interested in Show-o are comparing it to the repositories listed below.
- This repo contains the code for a 1D tokenizer and generator ☆932 · Updated 3 months ago
- SEED-Voken: A Series of Powerful Visual Tokenizers ☆903 · Updated last week
- 📖 A repository for organizing papers, code, and other resources related to unified multimodal models. ☆605 · Updated last week
- [CVPR 2025] The First Investigation of CoT Reasoning (RL, TTS, Reflection) in Image Generation ☆748 · Updated last month
- Next-Token Prediction is All You Need ☆2,162 · Updated 3 months ago
- PyTorch implementation of Transfusion, "Predict the Next Token and Diffuse Images with One Multi-Modal Model", from Meta AI ☆1,167 · Updated 3 weeks ago
- Autoregressive Model Beats Diffusion: 🦙 Llama for Scalable Image Generation ☆1,791 · Updated 10 months ago
- [CVPR 2025 Oral] Infinity ∞: Scaling Bitwise AutoRegressive Modeling for High-Resolution Image Synthesis ☆1,354 · Updated 2 weeks ago
- PyTorch implementation of MAR + DiffLoss (https://arxiv.org/abs/2406.11838) ☆1,635 · Updated 9 months ago
- [TMLR 2025 🔥] A survey of autoregressive models in vision. ☆641 · Updated this week
- LaVIT: Empower the Large Language Model to Understand and Generate Visual Content ☆584 · Updated 9 months ago
- [ICLR 2025] Autoregressive Video Generation without Vector Quantization ☆544 · Updated last month
- Implementation of the MagViT2 tokenizer in PyTorch ☆613 · Updated 5 months ago
- Official implementation of UnifiedReward & UnifiedReward-Think ☆446 · Updated 3 weeks ago
- Official implementation of Flow-GRPO: Training Flow Matching Models via Online RL ☆825 · Updated 3 weeks ago
- [ICLR'25 Oral] Representation Alignment for Generation: Training Diffusion Transformers Is Easier Than You Think ☆1,179 · Updated 3 months ago
- Official implementation of "Lumina-mGPT: Illuminate Flexible Photorealistic Text-to-Image Generation with Multimodal Generative Pretraini… ☆603 · Updated 3 months ago
- A fork to add multimodal model training to open-r1 ☆1,324 · Updated 5 months ago
- A family of lightweight multimodal models. ☆1,024 · Updated 7 months ago
- [CVPR 2024 Highlight] VBench - We Evaluate Video Generation ☆1,076 · Updated last week
- Video-R1: Reinforcing Video Reasoning in MLLMs [🔥 the first paper to explore R1 for video] ☆601 · Updated last month
- [CVPR 2024] Panda-70M: Captioning 70M Videos with Multiple Cross-Modality Teachers ☆607 · Updated 8 months ago
- [ECCV 2024] Official code for "Long-CLIP: Unlocking the Long-Text Capability of CLIP" ☆823 · Updated 10 months ago
- MM-EUREKA: Exploring the Frontiers of Multimodal Reasoning with Rule-based Reinforcement Learning ☆679 · Updated 2 weeks ago
- [Survey] Next Token Prediction Towards Multimodal Intelligence: A Comprehensive Survey ☆445 · Updated 5 months ago
- Liquid: Language Models are Scalable and Unified Multi-modal Generators ☆597 · Updated 3 months ago
- 🔥 Sa2VA: Marrying SAM2 with LLaVA for Dense Grounded Understanding of Images and Videos ☆1,163 · Updated last week
- ✨✨ [CVPR 2025] Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis ☆592 · Updated 2 months ago
- A One-for-All Multimodal Evaluation Toolkit Across Text, Image, Video, and Audio Tasks ☆2,711 · Updated this week
- Official repository for the paper PLLaVA ☆659 · Updated 11 months ago