leofan90 / Awesome-World-Models
A comprehensive collection of papers on the definition of World Models and their use in general video generation, Embodied AI, and autonomous driving, including papers, code, and related websites.
☆93 · Updated this week
Alternatives and similar repositories for Awesome-World-Models:
Users interested in Awesome-World-Models are comparing it to the libraries listed below:
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model ☆172 · Updated this week
- Awesome Papers about World Models in Autonomous Driving ☆78 · Updated 11 months ago
- [RSS 2024] Learning Manipulation by Predicting Interaction ☆103 · Updated 8 months ago
- [NeurIPS 2024] DrivingDojo Dataset: Advancing Interactive and Knowledge-Enriched Driving World Model ☆66 · Updated 4 months ago
- Doe-1: Closed-Loop Autonomous Driving with Large World Model ☆89 · Updated 3 months ago
- [CoRL 2024] Hint-AD: Holistically Aligned Interpretability for End-to-End Autonomous Driving ☆56 · Updated 5 months ago
- [ECCV 2024] TOD3Cap: Towards 3D Dense Captioning in Outdoor Scenes ☆116 · Updated last month
- Latest Advances on Embodied Multimodal LLMs (or Vision-Language-Action Models) ☆111 · Updated 9 months ago
- Unleashing the Power of VLMs in Autonomous Driving via Reinforcement Learning and Reasoning ☆204 · Updated 3 weeks ago
- Official code of the paper "DeeR-VLA: Dynamic Inference of Multimodal Large Language Models for Efficient Robot Execution" ☆89 · Updated 2 months ago
- Code & data for Grounded 3D-LLM with Referent Tokens ☆109 · Updated 3 months ago
- Nexus: Decoupled Diffusion Sparks Adaptive Scene Generation ☆32 · Updated this week
- RoboDual: Dual-System for Robotic Manipulation ☆57 · Updated last week
- [NeurIPS 2024] CLOVER: Closed-Loop Visuomotor Control with Generative Expectation for Robotic Manipulation ☆111 · Updated 4 months ago
- [arXiv 2025] MoLe-VLA: Dynamic Layer-Skipping Vision-Language-Action Model via Mixture-of-Layers for Efficient Robot Manipulation ☆26 · Updated 2 weeks ago
- [CVPR 2025] Lift3D Foundation Policy: Lifting 2D Large-Scale Pretrained Models for Robust 3D Robotic Manipulation ☆135 · Updated last month
- The PyTorch implementation of the paper "AdaWorld: Learning Adaptable World Models with Latent Actions" ☆57 · Updated 3 weeks ago
- Benchmark and model for step-by-step reasoning in autonomous driving ☆46 · Updated last month
- [CVPR 2025] The official implementation of "Universal Actions for Enhanced Embodied Foundation Models" ☆130 · Updated last month
- [CoRL 2024] VLM-Grounder: A VLM Agent for Zero-Shot 3D Visual Grounding ☆97 · Updated 4 months ago
- A Multi-Modal Large Language Model with Retrieval-augmented In-context Learning capacity designed for generalisable and explainable end-t… ☆92 · Updated 6 months ago
- MetaSpatial leverages reinforcement learning to enhance 3D spatial reasoning in vision-language models (VLMs), enabling more structured, … ☆104 · Updated 3 weeks ago
- The repo of the paper "RoboMamba: Multimodal State Space Model for Efficient Robot Reasoning and Manipulation" ☆112 · Updated 4 months ago
- 🔥 SpatialVLA: a spatially enhanced vision-language-action model trained on 1.1 million real robot episodes ☆226 · Updated last month
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization ☆109 · Updated 2 weeks ago
- Are VLMs Ready for Autonomous Driving? An Empirical Study from the Reliability, Data, and Metric Perspectives ☆70 · Updated 2 months ago
- ☆58 · Updated 8 months ago
- ☆52 · Updated 2 months ago
- Code for MultiPLY: A Multisensory Object-Centric Embodied Large Language Model in 3D World ☆128 · Updated 5 months ago
- Official code for the CVPR 2025 paper "Navigation World Models" ☆50 · Updated 2 weeks ago