PatrickHua / Awesome-World-Models
This repository is a collection of research papers on World Models.
☆43 · Updated 2 years ago
Alternatives and similar repositories for Awesome-World-Models
Users interested in Awesome-World-Models are comparing it to the repositories listed below.
- A paper list of world models ☆28 · Updated 9 months ago
- LogiCity@NeurIPS'24, D&B track. A multi-agent inductive learning environment for "abstractions". ☆27 · Updated 7 months ago
- ☆78 · Updated 7 months ago
- ☆44 · Updated 3 years ago
- Code for "Interactive Task Planning with Language Models" ☆33 · Updated last week
- ☆46 · Updated last year
- Code release for "Pre-training Contextualized World Models with In-the-wild Videos for Reinforcement Learning" (NeurIPS 2023), https://ar… ☆69 · Updated last year
- Codebase for HiP ☆90 · Updated 2 years ago
- Code for Stable Control Representations ☆26 · Updated 9 months ago
- Code and data for "Does Spatial Cognition Emerge in Frontier Models?" ☆26 · Updated 9 months ago
- ☆91 · Updated last year
- [ICLR 2025] Official implementation and benchmark evaluation repository of <PhysBench: Benchmarking and Enhancing Vision-Language Models … ☆83 · Updated 7 months ago
- Visual Embodied Brain: Let Multimodal Large Language Models See, Think, and Control in Spaces ☆87 · Updated 7 months ago
- ☆33 · Updated last year
- [CVPR 2023] Code for "3D Concept Learning and Reasoning from Multi-View Images" ☆84 · Updated 2 years ago
- Code for MultiPLY: A Multisensory Object-Centric Embodied Large Language Model in 3D World ☆134 · Updated last year
- IMProv: Inpainting-based Multimodal Prompting for Computer Vision Tasks ☆58 · Updated last year
- ☆34 · Updated 2 years ago
- Official implementation of "Self-Improving Video Generation" ☆76 · Updated 8 months ago
- Official implementation of the paper "ROCKET-1: Mastering Open-World Interaction with Visual-Temporal Context Prompting" (CVPR'25) ☆46 · Updated 9 months ago
- Code, data, and weights for the paper "What drives success in physical planning with Joint-Embedding Predictive World Models?" ☆106 · Updated 2 weeks ago
- [CVPR 2025] 3D-GRAND: Towards Better Grounding and Less Hallucination for 3D-LLMs ☆52 · Updated last year
- Dream-VL and Dream-VLA, a diffusion VLM and a diffusion VLA. ☆86 · Updated last week
- Slot-TTA shows that test-time adaptation using slot-centric models can improve image segmentation on out-of-distribution examples. ☆26 · Updated 2 years ago
- Official repository for "RLVR-World: Training World Models with Reinforcement Learning" (NeurIPS 2025), https://arxiv.org/abs/2505.13934 ☆188 · Updated 2 months ago
- Spatial Aptitude Training for Multimodal Language Models ☆23 · Updated 2 weeks ago
- Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning ☆79 · Updated 8 months ago
- Evaluate Multimodal LLMs as Embodied Agents ☆57 · Updated 11 months ago
- [CVPR'24 Highlight] The official code and data for the paper "EgoThink: Evaluating First-Person Perspective Thinking Capability of Vision-Lan… ☆63 · Updated 9 months ago
- HD-EPIC: Python script to download the entire dataset or parts of it ☆15 · Updated 3 months ago