LMM101 / Awesome-Multimodal-Next-Token-Prediction
[Survey] Next Token Prediction Towards Multimodal Intelligence: A Comprehensive Survey
⭐468 · Updated 11 months ago
Alternatives and similar repositories for Awesome-Multimodal-Next-Token-Prediction
Users interested in Awesome-Multimodal-Next-Token-Prediction are comparing it to the repositories listed below.
- Official implementation of UnifiedReward & [NeurIPS 2025] UnifiedReward-Think ⭐660 · Updated last week
- This is a repository for organizing papers, codes, and other resources related to unified multimodal models. ⭐779 · Updated 3 months ago
- Long-RL: Scaling RL to Long Sequences (NeurIPS 2025) ⭐684 · Updated 3 months ago
- [ICLR 2025] VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation ⭐411 · Updated 8 months ago
- Video-R1: Reinforcing Video Reasoning in MLLMs [🔥 the first paper to explore R1 for video] ⭐794 · Updated last month
- [CVPR 2025] 🔥 Official implementation of "TokenFlow: Unified Image Tokenizer for Multimodal Understanding and Generation". ⭐419 · Updated 5 months ago
- Explore the Multimodal "Aha Moment" on a 2B Model ⭐621 · Updated 9 months ago
- Awesome Unified Multimodal Models ⭐1,026 · Updated 4 months ago
- ✨ First Open-Source R1-like Video-LLM [2025/02/18] ⭐381 · Updated 10 months ago
- This is a repository for organizing papers, codes, and other resources related to unified multimodal models. ⭐342 · Updated last week
- ⭐1,069 · Updated last month
- Efficient Multimodal Large Language Models: A Survey ⭐382 · Updated 8 months ago
- MM-Eureka V0 (also called R1-Multimodal-Journey); the latest version is in MM-Eureka ⭐322 · Updated 6 months ago
- EVE Series: Encoder-Free Vision-Language Models from BAAI ⭐363 · Updated 5 months ago
- [COLM 2025] Open-Qwen2VL: Compute-Efficient Pre-Training of Fully-Open Multimodal LLMs on Academic Resources ⭐300 · Updated 4 months ago
- [NeurIPS 2025 Spotlight] A Unified Tokenizer for Visual Generation and Understanding ⭐496 · Updated 2 months ago
- This is the first paper to explore how to effectively use R1-like RL for MLLMs, introducing Vision-R1, a reasoning MLLM that leverages … ⭐748 · Updated 4 months ago
- ⭐304 · Updated 3 weeks ago
- 🔥🔥🔥 A curated list of papers on LLM-based multimodal generation (image, video, 3D, and audio). ⭐537 · Updated 9 months ago
- [Preprint] On the Generalization of SFT: A Reinforcement Learning Perspective with Reward Rectification. ⭐523 · Updated last week
- MM-EUREKA: Exploring the Frontiers of Multimodal Reasoning with Rule-based Reinforcement Learning ⭐766 · Updated 4 months ago
- The Next Step Forward in Multimodal LLM Alignment ⭐193 · Updated 8 months ago
- [NeurIPS 2024 Spotlight] Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought … ⭐416 · Updated last year
- [ICLR 2024 Spotlight] DreamLLM: Synergistic Multimodal Comprehension and Creation ⭐458 · Updated last year
- [TMLR 2025 🔥] A survey of autoregressive models in vision. ⭐777 · Updated 2 months ago
- Code for MetaMorph: Multimodal Understanding and Generation via Instruction Tuning ⭐231 · Updated 8 months ago
- R1-Onevision, a visual language model capable of deep CoT reasoning. ⭐573 · Updated 9 months ago
- LaVIT: Empower the Large Language Model to Understand and Generate Visual Content ⭐601 · Updated last year
- MMR1: Enhancing Multimodal Reasoning with Variance-Aware Sampling and Open Resources ⭐212 · Updated 3 months ago
- [ICLR 2025 Spotlight] OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text ⭐410 · Updated 8 months ago