leofan90 / Awesome-World-Models
A comprehensive list of papers on the definition of World Models and their use in general video generation, embodied AI, and autonomous driving, including papers, code, and related websites.
☆105 · Updated last week
Alternatives and similar repositories for Awesome-World-Models:
Users interested in Awesome-World-Models are comparing it to the libraries listed below
- [RSS 2024] Learning Manipulation by Predicting Interaction ☆106 · Updated 8 months ago
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model ☆188 · Updated last week
- [CoRL 2024] Hint-AD: Holistically Aligned Interpretability for End-to-End Autonomous Driving ☆56 · Updated 6 months ago
- [NeurIPS 2024] DrivingDojo Dataset: Advancing Interactive and Knowledge-Enriched Driving World Model ☆67 · Updated 5 months ago
- Awesome Papers about World Models in Autonomous Driving ☆78 · Updated last year
- Unleashing the Power of VLMs in Autonomous Driving via Reinforcement Learning and Reasoning ☆210 · Updated last month
- Latest Advances on Vision-Language-Action Models. ☆43 · Updated 2 months ago
- A comprehensive list of excellent research papers, models, datasets, and other resources on Vision-Language-Action (VLA) models in roboti… ☆89 · Updated this week
- Doe-1: Closed-Loop Autonomous Driving with Large World Model ☆89 · Updated 3 months ago
- ☆56 · Updated last week
- [NeurIPS 2024] CLOVER: Closed-Loop Visuomotor Control with Generative Expectation for Robotic Manipulation ☆113 · Updated 5 months ago
- [arXiv 2025] MoLe-VLA: Dynamic Layer-skipping Vision Language Action Model via Mixture-of-Layers for Efficient Robot Manipulation ☆28 · Updated last month
- Official code of the paper "DeeR-VLA: Dynamic Inference of Multimodal Large Language Models for Efficient Robot Execution" ☆91 · Updated 2 months ago
- MetaSpatial leverages reinforcement learning to enhance 3D spatial reasoning in vision-language models (VLMs), enabling more structured, … ☆112 · Updated last week
- RoboDual: Dual-System for Robotic Manipulation ☆71 · Updated last week
- ☆102 · Updated 3 weeks ago
- The repo of the paper `RoboMamba: Multimodal State Space Model for Efficient Robot Reasoning and Manipulation` ☆114 · Updated 4 months ago
- [CVPR 2025] Lift3D Foundation Policy: Lifting 2D Large-Scale Pretrained Models for Robust 3D Robotic Manipulation ☆138 · Updated 2 months ago
- ☆380 · Updated last year
- The Official Implementation of RoboMatrix ☆90 · Updated 4 months ago
- A Multi-Modal Large Language Model with Retrieval-augmented In-context Learning capacity designed for generalisable and explainable end-t… ☆94 · Updated 7 months ago
- [CVPR 2025] The official implementation of "Universal Actions for Enhanced Embodied Foundation Models" ☆145 · Updated last month
- Are VLMs Ready for Autonomous Driving? An Empirical Study from the Reliability, Data, and Metric Perspectives ☆73 · Updated 2 months ago
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization ☆113 · Updated last month
- Benchmark and model for step-by-step reasoning in autonomous driving. ☆48 · Updated last month
- ☆52 · Updated 2 months ago
- Latest Advances on Embodied Multimodal LLMs (or Vision-Language-Action Models). ☆112 · Updated 10 months ago
- RoboFactory: Exploring Embodied Agent Collaboration with Compositional Constraints ☆43 · Updated 3 weeks ago
- ☆75 · Updated last week
- Official PyTorch Implementation of Unified Video Action Model (RSS 2025) ☆180 · Updated last month