PatrickHua / Awesome-World-Models
This repository is a collection of research papers on World Models.
☆41 · Updated 2 years ago
Alternatives and similar repositories for Awesome-World-Models
Users interested in Awesome-World-Models are comparing it to the repositories listed below.
- A paper list of world models ☆28 · Updated 7 months ago
- ☆44 · Updated 3 years ago
- ☆77 · Updated 5 months ago
- Code for MultiPLY: A Multisensory Object-Centric Embodied Large Language Model in 3D World ☆134 · Updated last year
- ☆46 · Updated last year
- SimWorld: An Open-ended Realistic Simulator for Autonomous Agents in Physical and Social Worlds ☆74 · Updated last week
- [ICLR 2025] Official implementation and benchmark evaluation repository of <PhysBench: Benchmarking and Enhancing Vision-Language Models … ☆75 · Updated 5 months ago
- [CVPR 2023] Code for "3D Concept Learning and Reasoning from Multi-View Images" ☆82 · Updated last year
- Code for Stable Control Representations ☆26 · Updated 7 months ago
- Visual Embodied Brain: Let Multimodal Large Language Models See, Think, and Control in Spaces ☆85 · Updated 5 months ago
- Code for "Interactive Task Planning with Language Models" ☆32 · Updated 6 months ago
- LogiCity@NeurIPS'24, D&B track. A multi-agent inductive learning environment for "abstractions". ☆26 · Updated 5 months ago
- [CVPR'24 Highlight] The official code and data for the paper "EgoThink: Evaluating First-Person Perspective Thinking Capability of Vision-Lan… ☆62 · Updated 7 months ago
- Codebase for HiP ☆90 · Updated last year
- Code release for "Pre-training Contextualized World Models with In-the-wild Videos for Reinforcement Learning" (NeurIPS 2023), https://ar… ☆67 · Updated last year
- ☆33 · Updated 2 years ago
- [ICLR 2023] SQA3D for embodied scene understanding and reasoning ☆152 · Updated 2 years ago
- ☆86 · Updated last year
- Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning ☆76 · Updated 6 months ago
- Evaluate Multimodal LLMs as Embodied Agents ☆54 · Updated 9 months ago
- VLA-RFT: Vision-Language-Action Models with Reinforcement Fine-Tuning ☆81 · Updated last month
- IMProv: Inpainting-based Multimodal Prompting for Computer Vision Tasks ☆57 · Updated last year
- A paper list that includes world models or generative video models for embodied agents. ☆25 · Updated 10 months ago
- CLEVR3D Dataset: Comprehensive Visual Question Answering on Point Clouds through Compositional Scene Manipulation ☆19 · Updated last year
- [CVPR 2025] 3D-GRAND: Towards Better Grounding and Less Hallucination for 3D-LLMs ☆51 · Updated last year
- Official repository for "RLVR-World: Training World Models with Reinforcement Learning" (NeurIPS 2025), https://arxiv.org/abs/2505.13934 ☆135 · Updated 3 weeks ago
- ☆18 · Updated last year
- [CVPR 2025 Highlight] Towards Autonomous Micromobility through Scalable Urban Simulation ☆134 · Updated last week
- Latest Advances on Embodied Multimodal LLMs (or Vision-Language-Action Models). ☆121 · Updated last year
- ☆54 · Updated last year