showlab / Show-o
[ICLR 2025] Repository for Show-o, One Single Transformer to Unify Multimodal Understanding and Generation.
⭐ 1,220 · Updated last week
Alternatives and similar repositories for Show-o:
Users interested in Show-o are comparing it to the repositories listed below.
- SEED-Voken: A Series of Powerful Visual Tokenizers ⭐ 830 · Updated this week
- This repo contains the code for a 1D tokenizer and generator ⭐ 691 · Updated last week
- Autoregressive Model Beats Diffusion: 🦙 Llama for Scalable Image Generation ⭐ 1,576 · Updated 6 months ago
- PyTorch implementation of Transfusion, "Predict the Next Token and Diffuse Images with One Multi-Modal Model", from Meta AI ⭐ 950 · Updated 2 weeks ago
- 📖 A repository organizing papers, code, and other resources related to unified multimodal models ⭐ 374 · Updated last month
- LaVIT: Empower the Large Language Model to Understand and Generate Visual Content ⭐ 561 · Updated 4 months ago
- Implementation of MagViT2 Tokenizer in PyTorch ⭐ 590 · Updated last month
- Infinity ∞: Scaling Bitwise AutoRegressive Modeling for High-Resolution Image Synthesis ⭐ 968 · Updated this week
- Next-Token Prediction is All You Need ⭐ 2,004 · Updated 3 months ago
- PyTorch implementation of MAR + DiffLoss (https://arxiv.org/abs/2406.11838) ⭐ 1,285 · Updated 4 months ago
- Official Implementation of "Lumina-mGPT: Illuminate Flexible Photorealistic Text-to-Image Generation with Multimodal Generative Pretraining" ⭐ 543 · Updated 6 months ago
- [ECCV 2024] Official code for "Long-CLIP: Unlocking the Long-Text Capability of CLIP" ⭐ 753 · Updated 6 months ago
- Official PyTorch Implementation of "SiT: Exploring Flow and Diffusion-based Generative Models with Scalable Interpolant Transformers" ⭐ 764 · Updated 11 months ago
- Official PyTorch Implementation of "Representation Alignment for Generation: Training Diffusion Transformers Is Easier Than You Think" (ICLR 2025) ⭐ 832 · Updated 3 weeks ago
- [CVPR 2024] Panda-70M: Captioning 70M Videos with Multiple Cross-Modality Teachers ⭐ 572 · Updated 3 months ago
- A collection of papers on autoregressive models in vision ⭐ 406 · Updated this week
- [ICLR 2024 Spotlight] DreamLLM: Synergistic Multimodal Comprehension and Creation ⭐ 418 · Updated 2 months ago
- ⭐ 599 · Updated last year
- LLaVA-UHD v2: an MLLM Integrating High-Resolution Feature Pyramid via Hierarchical Window Transformer ⭐ 366 · Updated last month
- [NeurIPS 2024] A Unified Pixel-level Vision LLM for Understanding, Generating, Segmenting, Editing ⭐ 484 · Updated 4 months ago
- Official repository for the paper PLLaVA ⭐ 638 · Updated 6 months ago
- 🔥🔥🔥 A curated list of papers on LLM-based multimodal generation (image, video, 3D, and audio) ⭐ 428 · Updated last month
- PyTorch implementation of RCG (https://arxiv.org/abs/2312.03701) ⭐ 902 · Updated 4 months ago
- Anole: An Open, Autoregressive, and Native Multimodal Model for Interleaved Image-Text Generation ⭐ 721 · Updated 6 months ago
- [Survey] Next Token Prediction Towards Multimodal Intelligence: A Comprehensive Survey ⭐ 357 · Updated last month
- HART: Efficient Visual Generation with Hybrid Autoregressive Transformer ⭐ 418 · Updated 4 months ago
- Official implementation of SEED-LLaMA (ICLR 2024) ⭐ 596 · Updated 5 months ago
- The official GitHub page for the review paper "Sora: A Review on Background, Technology, Limitations, and Opportunities of Large Vision Models" ⭐ 495 · Updated 11 months ago