leofan90 / Awesome-World-Models
A comprehensive collection of resources on the definition of World Models and their use in general video generation, Embodied AI, and autonomous driving, including papers, code, and related websites.
☆198 · Updated last week
Alternatives and similar repositories for Awesome-World-Models
Users interested in Awesome-World-Models are comparing it to the repositories listed below.
- WorldVLA: Towards Autoregressive Action World Model ☆206 · Updated last week
- Official code for the CVPR 2025 paper "Navigation World Models". ☆282 · Updated 2 months ago
- [NeurIPS 2024] CLOVER: Closed-Loop Visuomotor Control with Generative Expectation for Robotic Manipulation ☆118 · Updated last week
- [CVPR 2025] The official implementation of "Universal Actions for Enhanced Embodied Foundation Models" ☆181 · Updated 3 months ago
- [RSS 2024] Learning Manipulation by Predicting Interaction ☆110 · Updated last week
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model ☆249 · Updated 3 weeks ago
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos ☆323 · Updated 5 months ago
- Latest Advances on Vision-Language-Action Models. ☆81 · Updated 4 months ago
- RoboDual: Dual-System for Robotic Manipulation ☆82 · Updated last week
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization ☆131 · Updated 3 months ago
- Awesome Papers about World Models in Autonomous Driving ☆81 · Updated last year
- Official code of the paper "DeeR-VLA: Dynamic Inference of Multimodal Large Language Models for Efficient Robot Execution" ☆98 · Updated 4 months ago
- [ICML'25] The PyTorch implementation of the paper "AdaWorld: Learning Adaptable World Models with Latent Actions" ☆125 · Updated 3 weeks ago
- LLaVA-VLA: A Simple Yet Powerful Vision-Language-Action Model [Actively Maintained🔥] ☆82 · Updated this week
- [RSS 2025] Learning to Act Anywhere with Task-centric Latent Actions ☆529 · Updated last week
- SoFar: Language-Grounded Orientation Bridges Spatial Reasoning and Object Manipulation ☆173 · Updated last week
- Official PyTorch implementation of the Unified Video Action Model (RSS 2025) ☆223 · Updated last week
- Latest Advances on Embodied Multimodal LLMs (or Vision-Language-Action Models). ☆115 · Updated last year
- OpenVLA: An open-source vision-language-action model for robotic manipulation. ☆220 · Updated 3 months ago
- [CVPR 2025] Lift3D Foundation Policy: Lifting 2D Large-Scale Pretrained Models for Robust 3D Robotic Manipulation ☆147 · Updated 2 weeks ago
- [ICLR 2025 Oral] Seer: Predictive Inverse Dynamics Models are Scalable Learners for Robotic Manipulation ☆199 · Updated 3 weeks ago
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation" ☆262 · Updated last year
- Official repo of VLABench, a large-scale benchmark designed for fairly evaluating VLAs, Embodied Agents, and VLMs. ☆243 · Updated last week
- Online RL with Simple Reward Enables Training VLA Models with Only One Trajectory ☆259 · Updated 2 weeks ago
- A Foundational Vision-Language-Action Model for Synergizing Cognition and Action in Robotic Manipulation ☆293 · Updated last month
- [ICLR 2025 Spotlight] MetaUrban: An Embodied AI Simulation Platform for Urban Micromobility ☆187 · Updated last month
- [Actively Maintained🔥] A list of Embodied AI papers accepted by top conferences (ICLR, NeurIPS, ICML, RSS, CoRL, ICRA, IROS, CVPR, ICCV,…) ☆323 · Updated last month