PatrickHua / Awesome-World-Models
This repository is a collection of research papers on World Models.
☆39 · Updated last year
Alternatives and similar repositories for Awesome-World-Models
Users interested in Awesome-World-Models are comparing it to the repositories listed below:
- A paper list of world models ☆27 · Updated last month
- ☆44 · Updated 2 years ago
- ☆76 · Updated 8 months ago
- Evaluate Multimodal LLMs as Embodied Agents ☆46 · Updated 2 months ago
- [ICLR 2025] Official implementation and benchmark evaluation repository of <PhysBench: Benchmarking and Enhancing Vision-Language Models … ☆57 · Updated 2 months ago
- LogiCity@NeurIPS'24, D&B track. A multi-agent inductive learning environment for "abstractions". ☆22 · Updated 6 months ago
- EgoVid-5M: A Large-Scale Video-Action Dataset for Egocentric Video Generation ☆103 · Updated 5 months ago
- Codebase for HiP ☆89 · Updated last year
- [RSS 2025] Learning to Act Anywhere with Task-centric Latent Actions ☆71 · Updated this week
- [ICLR 2023] SQA3D for embodied scene understanding and reasoning ☆131 · Updated last year
- [CVPR 2024] Official repository for "Tactile-Augmented Radiance Fields". ☆59 · Updated 3 months ago
- ☆71 · Updated 8 months ago
- Code release for "Pre-training Contextualized World Models with In-the-wild Videos for Reinforcement Learning" (NeurIPS 2023), https://ar… ☆62 · Updated 7 months ago
- Official PyTorch implementation of Doduo: Dense Visual Correspondence from Unsupervised Semantic-Aware Flow ☆44 · Updated last year
- [ICLR 2025 Spotlight] Grounding Video Models to Actions through Goal Conditioned Exploration ☆48 · Updated last week
- ☆42 · Updated last year
- Code for MultiPLY: A Multisensory Object-Centric Embodied Large Language Model in 3D World ☆128 · Updated 6 months ago
- Code for Stable Control Representations ☆24 · Updated last month
- [CVPR'24 Highlight] The official code and data for paper "EgoThink: Evaluating First-Person Perspective Thinking Capability of Vision-Lan… ☆58 · Updated last month
- Code release for paper "Autonomous Improvement of Instruction Following Skills via Foundation Models" | CoRL 2024 ☆71 · Updated 4 months ago
- Unifying 2D and 3D Vision-Language Understanding ☆79 · Updated last month
- ☆13 · Updated last year
- ☆45 · Updated last year
- ☆16 · Updated last year
- A comprehensive list of papers for the definition of World Models and using World Models for General Video Generation, Embodied AI, and A… ☆110 · Updated last week
- [CVPR 2023] Code for "3D Concept Learning and Reasoning from Multi-View Images" ☆78 · Updated last year
- Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning ☆61 · Updated last week
- Official implementation of: Bootstrapping Language-Guided Navigation Learning with Self-Refining Data Flywheel ☆24 · Updated 5 months ago
- [ICRA 2024] Dream2Real: Zero-Shot 3D Object Rearrangement with Vision-Language Models ☆68 · Updated last year
- [ICRA 2025] In-Context Imitation Learning via Next-Token Prediction ☆71 · Updated last month