PatrickHua / Awesome-World-Models
This repository is a collection of research papers on World Models.
☆37 · Updated last year
Alternatives and similar repositories for Awesome-World-Models:
Users interested in Awesome-World-Models are comparing it to the repositories listed below.
- A paper list of world models ☆25 · Updated 10 months ago
- ☆43 · Updated 2 years ago
- ☆75 · Updated 7 months ago
- Code for paper "Grounding Video Models to Actions through Goal Conditioned Exploration". ☆44 · Updated 2 months ago
- Code release for "Pre-training Contextualized World Models with In-the-wild Videos for Reinforcement Learning" (NeurIPS 2023), https://ar… ☆58 · Updated 5 months ago
- [ICLR 2025] Official implementation and benchmark evaluation repository of "PhysBench: Benchmarking and Enhancing Vision-Language Models …" ☆44 · Updated 2 weeks ago
- ☆42 · Updated 11 months ago
- Codebase for HiP ☆88 · Updated last year
- EgoVid-5M: A Large-Scale Video-Action Dataset for Egocentric Video Generation ☆96 · Updated 4 months ago
- Evaluate Multimodal LLMs as Embodied Agents ☆38 · Updated last month
- Official repo of EmbodiedBench, a comprehensive benchmark designed to evaluate MLLMs as embodied agents. ☆88 · Updated this week
- ☆94 · Updated 7 months ago
- ☆21 · Updated 2 months ago
- ☆67 · Updated 6 months ago
- Code for "Interactive Task Planning with Language Models" ☆27 · Updated last year
- Official implementation for the paper Steering Your Generalists: Improving Robotic Foundation Models via Value Guidance (CoRL 2024) ☆20 · Updated last month
- Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning ☆54 · Updated last month
- A vast array of Multi-Modal Embodied Robotic Foundation Models! ☆27 · Updated last year
- Code for MultiPLY: A Multisensory Object-Centric Embodied Large Language Model in 3D World ☆127 · Updated 5 months ago
- Code release for the paper "Autonomous Improvement of Instruction Following Skills via Foundation Models" (CoRL 2024) ☆68 · Updated 2 months ago
- ☆11 · Updated 8 months ago
- LogiCity@NeurIPS'24, D&B track. A multi-agent inductive learning environment for "abstractions". ☆21 · Updated 4 months ago
- ☆33 · Updated last year
- ☆62 · Updated 5 months ago
- [ICLR 2023] SQA3D for embodied scene understanding and reasoning ☆127 · Updated last year
- ☆12 · Updated last year
- Source code for the paper "COMBO: Compositional World Models for Embodied Multi-Agent Cooperation" ☆28 · Updated last week
- [CVPR'24 Highlight] The official code and data for the paper "EgoThink: Evaluating First-Person Perspective Thinking Capability of Vision-Lan…" ☆58 · Updated this week
- Code for Stable Control Representations ☆24 · Updated 2 months ago
- Code for the paper Predicting Point Tracks from Internet Videos enables Diverse Zero-Shot Manipulation ☆80 · Updated 7 months ago