leofan90 / Awesome-World-Models
A comprehensive collection of resources on the definition of World Models and their use in general video generation, Embodied AI, and autonomous driving, including papers, code, and related websites.
☆252 · Updated this week
Alternatives and similar repositories for Awesome-World-Models
Users interested in Awesome-World-Models are comparing it to the libraries listed below.
- ☆410 · Updated last year
- WorldVLA: Towards Autoregressive Action World Model ☆299 · Updated 3 weeks ago
- Unified Vision-Language-Action Model ☆158 · Updated last week
- DreamVLA: A Vision-Language-Action Model Dreamed with Comprehensive World Knowledge ☆120 · Updated last week
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model ☆261 · Updated last month
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos ☆340 · Updated 6 months ago
- LLaVA-VLA: A Simple Yet Powerful Vision-Language-Action Model [Actively Maintained🔥] ☆115 · Updated last week
- Official code for the CVPR 2025 paper "Navigation World Models". ☆327 · Updated 3 weeks ago
- [RSS 2025] Learning to Act Anywhere with Task-centric Latent Actions ☆612 · Updated this week
- Latest Advances on Embodied Multimodal LLMs (or Vision-Language-Action Models). ☆116 · Updated last year
- ☆268 · Updated 3 months ago
- Latest Advances on Vision-Language-Action Models. ☆86 · Updated 4 months ago
- Paper list in the survey: A Survey on Vision-Language-Action Models: An Action Tokenization Perspective ☆147 · Updated 3 weeks ago
- [ICML'25] The PyTorch implementation of the paper "AdaWorld: Learning Adaptable World Models with Latent Actions". ☆131 · Updated last month
- Official code of the paper "DeeR-VLA: Dynamic Inference of Multimodal Large Language Models for Efficient Robot Execution" ☆100 · Updated 5 months ago
- ☆377 · Updated 6 months ago
- Official repo of VLABench, a large-scale benchmark designed for fairly evaluating VLAs, Embodied Agents, and VLMs. ☆266 · Updated last month
- [ICCV 2025] RoboFactory: Exploring Embodied Agent Collaboration with Compositional Constraints ☆65 · Updated this week
- [CVPR 2025] The official implementation of "Universal Actions for Enhanced Embodied Foundation Models" ☆185 · Updated 4 months ago
- The official repo for "SpatialBot: Precise Spatial Understanding with Vision Language Models". ☆284 · Updated 2 months ago
- Online RL with Simple Reward Enables Training VLA Models with Only One Trajectory ☆305 · Updated last month
- [ICML 2025 Oral] Official repo of EmbodiedBench, a comprehensive benchmark designed to evaluate MLLMs as embodied agents. ☆160 · Updated 2 weeks ago
- [ICML 2024] Official code repository for 3D embodied generalist agent LEO ☆446 · Updated 3 months ago
- 🔥 SpatialVLA: a spatial-enhanced vision-language-action model that is trained on 1.1 million real robot episodes. Accepted at RSS 2025. ☆406 · Updated last month
- OpenHelix: An Open-source Dual-System VLA Model for Robotic Manipulation ☆228 · Updated last month
- Heterogeneous Pre-trained Transformer (HPT) as Scalable Policy Learner. ☆504 · Updated 7 months ago
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation" ☆269 · Updated last year
- The repo of the paper `RoboMamba: Multimodal State Space Model for Efficient Robot Reasoning and Manipulation` ☆129 · Updated 7 months ago
- Official implementation of "RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics" ☆106 · Updated this week
- [NeurIPS 2024] CLOVER: Closed-Loop Visuomotor Control with Generative Expectation for Robotic Manipulation ☆120 · Updated 3 weeks ago