tsinghua-fib-lab / World-Model
[ACM CSUR 2025] Understanding World or Predicting Future? A Comprehensive Survey of World Models
☆413 · Updated 2 months ago
Alternatives and similar repositories for World-Model
Users interested in World-Model are comparing it to the repositories listed below.
- A comprehensive list of papers for the definition of World Models and using World Models for General Video Generation, Embodied AI, and A… ☆1,134 · Updated last week
- A collection of papers/projects that train flow matching models/policies via RL. ☆351 · Updated last month
- RynnVLA-002: A Unified Vision-Language-Action and World Model ☆859 · Updated last month
- ☆449 · Updated last week
- A Curated List of Awesome Works in World Modeling, Aiming to Serve as a One-stop Resource for Researchers, Practitioners, and Enthusiasts… ☆1,769 · Updated last week
- ☆489 · Updated 3 months ago
- A curated list of awesome papers on Embodied AI and related research/industry-driven resources. ☆502 · Updated 7 months ago
- StarVLA: A Lego-like Codebase for Vision-Language-Action Model Developing ☆949 · Updated this week
- A Comprehensive Survey on World Models for Embodied AI ☆205 · Updated 2 months ago
- ☆223 · Updated 3 months ago
- Paper list for the survey: A Survey on Vision-Language-Action Models: An Action Tokenization Perspective ☆419 · Updated 6 months ago
- A paper list for spatial reasoning ☆614 · Updated last week
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model ☆336 · Updated 3 months ago
- Heterogeneous Pre-trained Transformer (HPT) as Scalable Policy Learner. ☆523 · Updated last year
- Latest Advances on Embodied Multimodal LLMs (or Vision-Language-Action Models). ☆121 · Updated last year
- SimpleVLA-RL: Scaling VLA Training via Reinforcement Learning ☆1,302 · Updated 3 weeks ago
- [NeurIPS 2025] DreamVLA: A Vision-Language-Action Model Dreamed with Comprehensive World Knowledge ☆282 · Updated 3 weeks ago
- 📖 A repository for organizing papers, code, and other resources related to Visual Reinforcement Learning. ☆391 · Updated last week
- InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy ☆344 · Updated 3 weeks ago
- Official code for the CVPR 2025 paper "Navigation World Models". ☆517 · Updated 2 months ago
- Single-file implementation to advance vision-language-action (VLA) models with reinforcement learning. ☆388 · Updated 2 months ago
- Latest Advances on Vision-Language-Action Models. ☆128 · Updated 10 months ago
- Spirit-v1.5: A Robotic Foundation Model by Spirit AI ☆465 · Updated 2 weeks ago
- [NeurIPS 2025] Official implementation of "RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics" ☆222 · Updated last month
- [RSS 2025] Learning to Act Anywhere with Task-centric Latent Actions ☆968 · Updated 2 months ago
- OpenHelix: An Open-source Dual-System VLA Model for Robotic Manipulation ☆337 · Updated 5 months ago
- [Actively Maintained🔥] A list of Embodied AI papers accepted by top conferences (ICLR, NeurIPS, ICML, RSS, CoRL, ICRA, IROS, CVPR, ICCV,… ☆463 · Updated last month
- A curated list of large VLM-based VLA models for robotic manipulation. ☆328 · Updated last month
- Awesome paper list and repos of the paper "A comprehensive survey of embodied world models". ☆60 · Updated 3 months ago
- Dexbotic: Open-Source Vision-Language-Action Toolbox ☆675 · Updated last week