PatrickHua / Awesome-World-Models
A curated collection of research papers on World Models.
☆37 · Updated last year

Alternatives and similar repositories for Awesome-World-Models. Users interested in Awesome-World-Models are comparing it to the repositories listed below:
- A paper list of world models ☆25 · Updated 9 months ago
- Code for the paper "Grounding Video Models to Actions through Goal Conditioned Exploration" ☆41 · Updated last month
- Code for Stable Control Representations ☆23 · Updated last month
- [ICCV 2023] Understanding 3D Object Interaction from a Single Image ☆41 · Updated 11 months ago
- IMProv: Inpainting-based Multimodal Prompting for Computer Vision Tasks ☆59 · Updated 4 months ago
- EgoVid-5M: A Large-Scale Video-Action Dataset for Egocentric Video Generation ☆90 · Updated 3 months ago
- CLEVR3D Dataset: Comprehensive Visual Question Answering on Point Clouds through Compositional Scene Manipulation ☆15 · Updated last year
- Code release for the paper "Autonomous Improvement of Instruction Following Skills via Foundation Models" (CoRL 2024) ☆62 · Updated last month
- Code release for "Pre-training Contextualized World Models with In-the-wild Videos for Reinforcement Learning" (NeurIPS 2023), https://ar… ☆58 · Updated 4 months ago
- 📱👉🏠 Perform conditional procedural generation to generate houses like your own! ☆34 · Updated last year
- [CVPR 2023] Code for "3D Concept Learning and Reasoning from Multi-View Images" ☆77 · Updated last year
- Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning ☆42 · Updated 3 weeks ago
- [ICLR 2023] SQA3D for embodied scene understanding and reasoning ☆124 · Updated last year
- HandsOnVLM: Vision-Language Models for Hand-Object Interaction Prediction ☆24 · Updated last month
- Code for MultiPLY: A Multisensory Object-Centric Embodied Large Language Model in 3D World ☆124 · Updated 3 months ago
- Reading list for research topics in intuitive physics for artificial cognition ☆18 · Updated 2 years ago
- Repo for Bring Your Own Vision-Language-Action (VLA) model, arXiv 2024 ☆27 · Updated 3 weeks ago
- Official implementation of 3D-GRAND: Towards Better Grounding and Less Hallucination for 3D-LLMs ☆36 · Updated 8 months ago
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization ☆76 · Updated 2 weeks ago
- [CVPR 2024] Official repository for "Tactile-Augmented Radiance Fields" ☆55 · Updated last week
- Codebase for HiP ☆88 · Updated last year
- Code for "Chat-3D: Data-efficiently Tuning Large Language Model for Universal Dialogue of 3D Scenes" ☆50 · Updated 10 months ago