lichao-sun / SoraReview
The official GitHub page for the review paper "Sora: A Review on Background, Technology, Limitations, and Opportunities of Large Vision Models".
⭐496 · Updated last year
Alternatives and similar repositories for SoraReview:
Users who are interested in SoraReview are comparing it to the libraries listed below.
- 🔥🔥🔥 A curated list of papers on LLMs-based multimodal generation (image, video, 3D and audio). ⭐441 · Updated last week
- [CVPR2024 Highlight] VBench - We Evaluate Video Generation ⭐839 · Updated this week
- [CVPR 2024] Panda-70M: Captioning 70M Videos with Multiple Cross-Modality Teachers ⭐586 · Updated 5 months ago
- LaVIT: Empower the Large Language Model to Understand and Generate Visual Content ⭐569 · Updated 5 months ago
- [Survey] Next Token Prediction Towards Multimodal Intelligence: A Comprehensive Survey ⭐398 · Updated 2 months ago
- SEED-Voken: A Series of Powerful Visual Tokenizers ⭐849 · Updated last month
- Official PyTorch Implementation of "SiT: Exploring Flow and Diffusion-based Generative Models with Scalable Interpolant Transformers" ⭐790 · Updated last year
- Implementation of MagViT2 Tokenizer in Pytorch ⭐597 · Updated 2 months ago
- A reading list of video generation ⭐524 · Updated this week
- Official repo for paper "MiraData: A Large-Scale Video Dataset with Long Durations and Structured Captions" ⭐415 · Updated 6 months ago
- [ICLR 2025] Repository for Show-o, One Single Transformer to Unify Multimodal Understanding and Generation. ⭐1,275 · Updated last week
- VisionLLaMA: A Unified LLaMA Backbone for Vision Tasks ⭐383 · Updated 8 months ago
- [TMLR 2025🔥] A survey for the autoregressive models in vision. ⭐443 · Updated this week
- A collection of awesome video generation studies. ⭐481 · Updated 2 weeks ago
- Infinity ∞: Scaling Bitwise AutoRegressive Modeling for High-Resolution Image Synthesis ⭐1,021 · Updated last month
- [ICLR 2024 Spotlight] DreamLLM: Synergistic Multimodal Comprehension and Creation ⭐427 · Updated 3 months ago
- Latte: Latent Diffusion Transformer for Video Generation. ⭐1,800 · Updated 3 weeks ago
- This is a repository for organizing papers, codes and other resources related to unified multimodal models. ⭐415 · Updated last week
- Diffusion Model-Based Image Editing: A Survey (TPAMI 2025) ⭐583 · Updated 2 weeks ago
- This repo contains the code for 1D tokenizer and generator ⭐769 · Updated this week
- My implementation of "Patch n' Pack: NaViT, a Vision Transformer for any Aspect Ratio and Resolution" ⭐224 · Updated last month
- Autoregressive Model Beats Diffusion: 🦙 Llama for Scalable Image Generation ⭐1,623 · Updated 7 months ago
- [ICLR'25 Oral] Representation Alignment for Generation: Training Diffusion Transformers Is Easier Than You Think ⭐882 · Updated last week
- Pytorch implementation of Transfusion, "Predict the Next Token and Diffuse Images with One Multi-Modal Model", from MetaAI ⭐982 · Updated this week
- [CVPR 2024] OneLLM: One Framework to Align All Modalities with Language ⭐624 · Updated 5 months ago
- VideoSys: An easy and efficient system for video generation ⭐1,944 · Updated 2 weeks ago
- Official repo and evaluation implementation of VSI-Bench ⭐421 · Updated 3 weeks ago
- Next-Token Prediction is All You Need ⭐2,034 · Updated last week
- Official implementation of SEED-LLaMA (ICLR 2024). ⭐604 · Updated 6 months ago
- ⭐602 · Updated last year