IranQin / Awesome_World_Model_Papers
[World-Model-Survey-2024] Paper list and projects for world models
☆15 · Updated 10 months ago
Alternatives and similar repositories for Awesome_World_Model_Papers
Users interested in Awesome_World_Model_Papers are comparing it to the repositories listed below
- Official implementation of "Self-Improving Video Generation"☆72Updated 4 months ago
- Official repository for "iVideoGPT: Interactive VideoGPTs are Scalable World Models" (NeurIPS 2024), https://arxiv.org/abs/2405.15223☆145Updated 3 months ago
- [CVPR2024] The official implementation of MP5 ☆103 · Updated last year
- ☆89 · Updated last month
- OST-Bench: Evaluating the Capabilities of MLLMs in Online Spatio-temporal Scene Understanding ☆59 · Updated last month
- [ICCV2025 Oral] Latent Motion Token as the Bridging Language for Learning Robot Manipulation from Videos ☆133 · Updated 4 months ago
- A paper list for spatial reasoning ☆139 · Updated 3 months ago
- [ICML'25] The PyTorch implementation of the paper "AdaWorld: Learning Adaptable World Models with Latent Actions" ☆153 · Updated 3 months ago
- [ICML2025] The code and data for the paper "Towards World Simulator: Crafting Physical Commonsense-Based Benchmark for Video Generation" ☆117 · Updated 10 months ago
- MetaSpatial leverages reinforcement learning to enhance 3D spatial reasoning in vision-language models (VLMs), enabling more structured, … ☆189 · Updated 4 months ago
- [NeurIPS 2025] Official Repo of Omni-R1: Reinforcement Learning for Omnimodal Reasoning via Two-System Collaboration ☆81 · Updated 3 months ago
- ☆88 · Updated 2 months ago
- Video-Holmes: Can MLLM Think Like Holmes for Complex Video Reasoning? ☆73 · Updated 2 months ago
- [NeurIPS-2024] The official implementation of "Instruction-Guided Visual Masking" ☆38 · Updated 10 months ago
- SpaceR: The first MLLM empowered by SG-RLVR for video spatial reasoning ☆79 · Updated 2 months ago
- [ICML 2024] A Touch, Vision, and Language Dataset for Multimodal Alignment ☆84 · Updated 3 months ago
- [ICLR2025] Official code implementation of Video-UTR: Unhackable Temporal Rewarding for Scalable Video MLLMs ☆59 · Updated 6 months ago
- The official code of "Thinking With Videos: Multimodal Tool-Augmented Reinforcement Learning for Long Video Reasoning" ☆33 · Updated 3 weeks ago
- ☆30 · Updated 9 months ago
- Unified Vision-Language-Action Model ☆193 · Updated 2 months ago
- [ICML 2025 Oral] Official repo of EmbodiedBench, a comprehensive benchmark designed to evaluate MLLMs as embodied agents. ☆185 · Updated 2 months ago
- EgoVid-5M: A Large-Scale Video-Action Dataset for Egocentric Video Generation ☆116 · Updated last month
- [IROS'25 Oral & NeurIPSw'24] Official implementation of "MineDreamer: Learning to Follow Instructions via Chain-of-Imagination for Simula… ☆95 · Updated 3 months ago
- ☆84 · Updated last month
- A list of works on video generation towards world models ☆165 · Updated last month
- Official code for MotionBench (CVPR 2025) ☆56 · Updated 6 months ago
- Official implementation of "RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics" ☆149 · Updated last month
- [ICCV 2025] RoboFactory: Exploring Embodied Agent Collaboration with Compositional Constraints ☆80 · Updated 2 weeks ago
- Uni-CoT: Towards Unified Chain-of-Thought Reasoning Across Text and Vision ☆134 · Updated this week
- ACTIVE-O3: Empowering Multimodal Large Language Models with Active Perception via GRPO ☆72 · Updated 3 months ago