QinengWang-Aiden / Awesome-embodied-world-model-papers
A paper list covering world models and generative video models for embodied agents.
☆22 · Updated 3 months ago
Alternatives and similar repositories for Awesome-embodied-world-model-papers:
Users interested in Awesome-embodied-world-model-papers are comparing it to the repositories listed below.
- EgoVid-5M: A Large-Scale Video-Action Dataset for Egocentric Video Generation — ☆103 · Updated 5 months ago
- FleVRS: Towards Flexible Visual Relationship Segmentation, NeurIPS 2024 — ☆20 · Updated 4 months ago
- (untitled repository) — ☆27 · Updated 3 weeks ago
- [ICLR 2025] Official implementation and benchmark evaluation repository of "PhysBench: Benchmarking and Enhancing Vision-Language Models …" — ☆50 · Updated last month
- [CVPR 2023] Code for "3D Concept Learning and Reasoning from Multi-View Images" — ☆78 · Updated last year
- (untitled repository) — ☆14 · Updated 3 weeks ago
- Code for the paper "Grounding Video Models to Actions through Goal Conditioned Exploration" (ICLR 2025 Spotlight) — ☆44 · Updated 3 months ago
- Code for the paper "Super-CLEVR: A Virtual Benchmark to Diagnose Domain Robustness in Visual Reasoning" — ☆34 · Updated last year
- [CVPR 2024] Situational Awareness Matters in 3D Vision Language Reasoning — ☆37 · Updated 4 months ago
- [ICLR 2024] Official implementation of the paper "TOSS: High-quality Text-guided Novel View Synthesis from a Single Image" — ☆22 · Updated 11 months ago
- Official implementation of "WorldScore: A Unified Evaluation Benchmark for World Generation" — ☆92 · Updated last week
- [NeurIPS 2024] Official code repository for the MSR3D paper — ☆50 · Updated this week
- Code for "Chat-3D: Data-efficiently Tuning Large Language Model for Universal Dialogue of 3D Scenes" — ☆54 · Updated last year
- HaWoR: World-Space Hand Motion Reconstruction from Egocentric Videos — ☆49 · Updated 3 weeks ago
- Diffusion Powers Video Tokenizer for Comprehension and Generation (CVPR 2025) — ☆66 · Updated last month
- Training-free Guidance in Text-to-Video Generation via Multimodal Planning and Structured Noise Initialization — ☆15 · Updated last week
- [ECCV 2024] M3DBench introduces a comprehensive 3D instruction-following dataset with support for interleaved multi-modal prompts — ☆60 · Updated 6 months ago
- Spatial-R1: The first MLLM trained using GRPO for spatial reasoning in videos — ☆25 · Updated last week
- [arXiv 2024] Official repository of the paper "Unsupervised Discovery of Object-Centric Neural Fields" — ☆17 · Updated 2 months ago
- https://coshand.cs.columbia.edu/ — ☆16 · Updated 6 months ago
- (untitled repository) — ☆17 · Updated 10 months ago
- [CVPR 2025] 3D-GRAND: Towards Better Grounding and Less Hallucination for 3D-LLMs — ☆37 · Updated 10 months ago
- Repo for "Human-Centric Foundation Models: Perception, Generation and Agentic Modeling" (https://arxiv.org/abs/2502.08556) — ☆39 · Updated 2 months ago
- A comprehensive list of papers investigating physical cognition in video generation, including papers, code, and related websites — ☆71 · Updated this week
- Code for the paper "GenHowTo: Learning to Generate Actions and State Transformations from Instructional Videos", published at CVPR 2024 — ☆51 · Updated last year
- (untitled repository) — ☆12 · Updated 3 weeks ago
- Agent-to-Sim: Learning Interactive Behavior from Casual Videos — ☆42 · Updated 6 months ago
- Code release for "PISA Experiments: Exploring Physics Post-Training for Video Diffusion Models by Watching Stuff Drop" (arXiv 2025) — ☆28 · Updated last month
- [CVPR 2025] Uni4D: Unifying Visual Foundation Models for 4D Modeling from a Single Video — ☆67 · Updated last week
- Official implementation of the paper "Exploring the Potential of Encoder-free Architectures in 3D LMMs" — ☆51 · Updated 2 weeks ago