GuochenZhou / World-Model
A paper list of world models. ☆27 · Updated last month
Alternatives and similar repositories for World-Model
Users interested in World-Model are comparing it to the repositories listed below.
- This repository is a collection of research papers on World Models. ☆39 · Updated last year
- [RSS 2025] Learning to Act Anywhere with Task-centric Latent Actions ☆71 · Updated this week
- Implementation of Language-Conditioned Path Planning (Amber Xie, Youngwoon Lee, Pieter Abbeel, Stephen James) ☆23 · Updated last year
- ☆31 · Updated 2 weeks ago
- ☆33 · Updated last year
- Code for Stable Control Representations ☆24 · Updated last month
- IMProv: Inpainting-based Multimodal Prompting for Computer Vision Tasks ☆58 · Updated 7 months ago
- Official implementation of Learning Navigational Visual Representations with Semantic Map Supervision (ICCV 2023) ☆25 · Updated last year
- Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning ☆61 · Updated last week
- (CVPR 2025) A Data-Centric Revisit of Pre-Trained Vision Models for Robot Learning ☆13 · Updated 2 months ago
- Slot-TTA shows that test-time adaptation using slot-centric models can improve image segmentation on out-of-distribution examples. ☆26 · Updated last year
- ☆71 · Updated 8 months ago
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation" ☆44 · Updated last year
- Code release for "Pre-training Contextualized World Models with In-the-wild Videos for Reinforcement Learning" (NeurIPS 2023), https://ar… ☆62 · Updated 7 months ago
- ☆76 · Updated 8 months ago
- Planning as In-Painting: A Diffusion-Based Embodied Task Planning Framework for Environments under Uncertainty ☆20 · Updated last year
- Repo for Bring Your Own Vision-Language-Action (VLA) model, arXiv 2024 ☆27 · Updated 3 months ago
- ☆44 · Updated 2 years ago
- [ICLR 2025 Spotlight] Grounding Video Models to Actions through Goal Conditioned Exploration ☆48 · Updated last week
- Evaluating pre-trained navigation agents under corruptions ☆28 · Updated 3 years ago
- ☆48 · Updated last year
- [NeurIPS 2022] Code for "Visual Concepts Tokenization" ☆21 · Updated 2 years ago
- Evaluate Multimodal LLMs as Embodied Agents ☆46 · Updated 2 months ago
- Implementation of Latent Diffusion Planning (Amber Xie, Oleh Rybkin, Dorsa Sadigh, Chelsea Finn) ☆25 · Updated 2 weeks ago
- [ICLR 2025] Official implementation and benchmark evaluation repository of <PhysBench: Benchmarking and Enhancing Vision-Language Models … ☆57 · Updated 2 months ago
- [CVPR'24 Highlight] The official code and data for the paper "EgoThink: Evaluating First-Person Perspective Thinking Capability of Vision-Lan… ☆58 · Updated last month
- ☆42 · Updated last year
- LogiCity @ NeurIPS'24, D&B track. A multi-agent inductive learning environment for "abstractions". ☆22 · Updated 6 months ago
- A vast array of Multi-Modal Embodied Robotic Foundation Models! ☆27 · Updated last year
- Learning to Identify Critical States for Reinforcement Learning from Videos (accepted to ICCV'23) ☆26 · Updated last year