PatrickHua / Awesome-World-Models
This repository is a collection of research papers on World Models.
☆39 · Updated 2 years ago
Alternatives and similar repositories for Awesome-World-Models
Users interested in Awesome-World-Models are comparing it to the repositories listed below.
- A paper list of world models ☆29 · Updated 6 months ago
- ☆76 · Updated 4 months ago
- Visual Embodied Brain: Let Multimodal Large Language Models See, Think, and Control in Spaces ☆81 · Updated 4 months ago
- ☆44 · Updated 3 years ago
- [ICLR 2025] Official implementation and benchmark evaluation repository of <PhysBench: Benchmarking and Enhancing Vision-Language Models … ☆72 · Updated 4 months ago
- ☆45 · Updated last year
- [NeurIPS 2025 Spotlight] SimWorld: An Open-ended Realistic Simulator for Autonomous Agents in Physical and Social Worlds ☆65 · Updated last week
- LogiCity@NeurIPS'24, D&B track. A multi-agent inductive learning environment for "abstractions". ☆25 · Updated 4 months ago
- Codebase for HiP ☆90 · Updated last year
- Code for "Interactive Task Planning with Language Models" ☆32 · Updated 5 months ago
- [CVPR 2023] Code for "3D Concept Learning and Reasoning from Multi-View Images" ☆80 · Updated last year
- Code for MultiPLY: A Multisensory Object-Centric Embodied Large Language Model in 3D World ☆133 · Updated 11 months ago
- Slot-TTA shows that test-time adaptation using slot-centric models can improve image segmentation on out-of-distribution examples. ☆26 · Updated 2 years ago
- ☆33 · Updated 2 years ago
- Code release for "Pre-training Contextualized World Models with In-the-wild Videos for Reinforcement Learning" (NeurIPS 2023), https://ar… ☆67 · Updated last year
- Code and data for "Does Spatial Cognition Emerge in Frontier Models?" ☆26 · Updated 5 months ago
- Code for Stable Control Representations ☆25 · Updated 6 months ago
- VLA-RFT: Vision-Language-Action Models with Reinforcement Fine-Tuning ☆45 · Updated this week
- Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning ☆74 · Updated 4 months ago
- Evaluate Multimodal LLMs as Embodied Agents ☆54 · Updated 7 months ago
- Official repository for "RLVR-World: Training World Models with Reinforcement Learning" (NeurIPS 2025), https://arxiv.org/abs/2505.13934 ☆102 · Updated 2 weeks ago
- IMProv: Inpainting-based Multimodal Prompting for Computer Vision Tasks ☆58 · Updated last year
- [CVPR'24 Highlight] The official code and data for the paper "EgoThink: Evaluating First-Person Perspective Thinking Capability of Vision-Lan… ☆61 · Updated 6 months ago
- [ICLR 2023] SQA3D for embodied scene understanding and reasoning ☆148 · Updated last year
- ☆80 · Updated last year
- Virtual Community: An Open World for Humans, Robots, and Society ☆177 · Updated last week
- [CVPR 2025] 3D-GRAND: Towards Better Grounding and Less Hallucination for 3D-LLMs ☆46 · Updated last year
- [IJCV] EgoPlan-Bench: Benchmarking Multimodal Large Language Models for Human-Level Planning ☆73 · Updated 10 months ago
- InternVLA-A1: Unifying Understanding, Generation, and Action for Robotic Manipulation ☆43 · Updated 3 weeks ago
- Latest Advances on Embodied Multimodal LLMs (or Vision-Language-Action Models) ☆121 · Updated last year