leofan90 / Awesome-World-Models
A comprehensive collection of resources on the definition of World Models and their use for general video generation, Embodied AI, and autonomous driving, including papers, code, and related websites.
☆843 · Updated this week
Alternatives and similar repositories for Awesome-World-Models
Users interested in Awesome-World-Models are comparing it to the libraries listed below.
- A Curated List of Awesome Works in World Modeling, Aiming to Serve as a One-stop Resource for Researchers, Practitioners, and Enthusiasts… ☆1,354 · Updated this week
- [ACM CSUR 2025] Understanding World or Predicting Future? A Comprehensive Survey of World Models ☆237 · Updated 2 weeks ago
- RynnVLA-002: A Unified Vision-Language-Action and World Model ☆679 · Updated 2 weeks ago
- Heterogeneous Pre-trained Transformer (HPT) as Scalable Policy Learner. ☆518 · Updated last year
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos ☆414 · Updated 10 months ago
- SimpleVLA-RL: Scaling VLA Training via Reinforcement Learning ☆1,077 · Updated last month
- A collection of papers and projects that train flow matching models/policies via RL. ☆310 · Updated last week
- Paper list from the survey "A Survey on Vision-Language-Action Models: An Action Tokenization Perspective" ☆341 · Updated 5 months ago
- Official code for the CVPR 2025 paper "Navigation World Models". ☆455 · Updated 2 weeks ago
- [RSS 2025] Learning to Act Anywhere with Task-centric Latent Actions ☆864 · Updated 2 weeks ago
- A curated list of awesome papers on Embodied AI and related research/industry-driven resources. ☆486 · Updated 6 months ago
- Official repo and evaluation implementation of VSI-Bench ☆645 · Updated 4 months ago
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model ☆322 · Updated 2 months ago
- StarVLA: A Lego-like Codebase for Vision-Language-Action Model Developing ☆513 · Updated 2 weeks ago
- 📖 A repository for organizing papers, code, and other resources related to Visual Reinforcement Learning. ☆347 · Updated last week
- Official repo of VLABench, a large-scale benchmark designed for fairly evaluating VLAs, Embodied Agents, and VLMs. ☆336 · Updated 3 weeks ago
- 🔥 SpatialVLA: a spatial-enhanced vision-language-action model trained on 1.1 million real robot episodes. Accepted at RSS 2025. ☆578 · Updated 5 months ago
- [ICML 2024] 3D-VLA: A 3D Vision-Language-Action Generative World Model ☆600 · Updated last year
- [ICML'25] The PyTorch implementation of the paper "AdaWorld: Learning Adaptable World Models with Latent Actions" ☆180 · Updated 5 months ago
- A paper list for spatial reasoning ☆471 · Updated last week
- Unified Vision-Language-Action Model ☆245 · Updated last month
- A Foundational Vision-Language-Action Model for Synergizing Cognition and Action in Robotic Manipulation ☆383 · Updated last month
- Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success ☆869 · Updated 2 months ago
- OpenHelix: An Open-source Dual-System VLA Model for Robotic Manipulation ☆328 · Updated 3 months ago
- Official repository for "iVideoGPT: Interactive VideoGPTs are Scalable World Models" (NeurIPS 2024), https://arxiv.org/abs/2405.15223 ☆158 · Updated 2 months ago
- Latest Advances on Vision-Language-Action Models. ☆119 · Updated 9 months ago
- Official PyTorch Implementation of Unified Video Action Model (RSS 2025) ☆298 · Updated 4 months ago