leofan90 / Awesome-World-Models
A comprehensive collection of papers on the definition of World Models and on using World Models for General Video Generation, Embodied AI, and Autonomous Driving, with links to code and related websites.
☆98 · Updated this week
Alternatives and similar repositories for Awesome-World-Models:
Users interested in Awesome-World-Models are comparing it to the repositories listed below.
- [NeurIPS 2024] DrivingDojo Dataset: Advancing Interactive and Knowledge-Enriched Driving World Model ☆66 · Updated 4 months ago
- [RSS 2024] Learning Manipulation by Predicting Interaction ☆103 · Updated 8 months ago
- Awesome Papers about World Models in Autonomous Driving ☆78 · Updated 11 months ago
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model ☆176 · Updated this week
- [CoRL 2024] Hint-AD: Holistically Aligned Interpretability for End-to-End Autonomous Driving ☆56 · Updated 5 months ago
- Doe-1: Closed-Loop Autonomous Driving with Large World Model ☆89 · Updated 3 months ago
- Official code of the paper "DeeR-VLA: Dynamic Inference of Multimodal Large Language Models for Efficient Robot Execution" ☆90 · Updated 2 months ago
- RoboDual: Dual-System for Robotic Manipulation ☆57 · Updated last week
- Unleashing the Power of VLMs in Autonomous Driving via Reinforcement Learning and Reasoning ☆207 · Updated last month
- [CVPR 2025] Lift3D Foundation Policy: Lifting 2D Large-Scale Pretrained Models for Robust 3D Robotic Manipulation ☆137 · Updated last month
- Latest Advances on Embodied Multimodal LLMs (or Vision-Language-Action Models) ☆111 · Updated 9 months ago
- Nexus: Decoupled Diffusion Sparks Adaptive Scene Generation ☆32 · Updated this week
- [CVPR 2025] The official implementation of "Universal Actions for Enhanced Embodied Foundation Models" ☆133 · Updated last month
- [NeurIPS 2024] CLOVER: Closed-Loop Visuomotor Control with Generative Expectation for Robotic Manipulation ☆111 · Updated 4 months ago
- Are VLMs Ready for Autonomous Driving? An Empirical Study from the Reliability, Data, and Metric Perspectives ☆70 · Updated 2 months ago
- The PyTorch implementation of the paper "AdaWorld: Learning Adaptable World Models with Latent Actions" ☆57 · Updated last month
- [ECCV 2024] TOD3Cap: Towards 3D Dense Captioning in Outdoor Scenes ☆116 · Updated last month
- 🔥 SpatialVLA: a spatial-enhanced vision-language-action model that is trained on 1.1 million real robot episodes ☆237 · Updated last month
- ☆52 · Updated 2 months ago
- Benchmark and model for step-by-step reasoning in autonomous driving ☆46 · Updated last month
- Code & Data for Grounded 3D-LLM with Referent Tokens ☆110 · Updated 3 months ago
- Latest Advances on Vision-Language-Action Models ☆38 · Updated last month
- [arXiv 2025] MoLe-VLA: Dynamic Layer-skipping Vision Language Action Model via Mixture-of-Layers for Efficient Robot Manipulation ☆26 · Updated 2 weeks ago
- ☆59 · Updated 8 months ago
- A Multi-Modal Large Language Model with Retrieval-augmented In-context Learning capacity designed for generalisable and explainable end-t… ☆92 · Updated 6 months ago
- ☆377 · Updated 11 months ago
- Repository for the paper "RoboMamba: Multimodal State Space Model for Efficient Robot Reasoning and Manipulation" ☆112 · Updated 4 months ago
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization ☆110 · Updated 2 weeks ago
- [CoRL 2024] VLM-Grounder: A VLM Agent for Zero-Shot 3D Visual Grounding ☆97 · Updated 4 months ago
- [ECCV 2024] Embodied Understanding of Driving Scenarios ☆189 · Updated 3 months ago