facebookresearch / nwm
Official code for the CVPR 2025 paper "Navigation World Models".
☆247 · Updated 2 months ago
Alternatives and similar repositories for nwm
Users interested in nwm are comparing it to the repositories listed below.
- [CVPR 2025] CityWalker: Learning Embodied Urban Navigation from Web-Scale Videos ☆102 · Updated 2 months ago
- [CoRL 2024] VLM-Grounder: A VLM Agent for Zero-Shot 3D Visual Grounding ☆107 · Updated last month
- [ICLR 2025] SPA: 3D Spatial-Awareness Enables Effective Embodied Representation ☆157 · Updated last week
- [ECCV 2024] ManiGaussian: Dynamic Gaussian Splatting for Multi-task Robotic Manipulation ☆228 · Updated 3 months ago
- [CVPR'25] SeeGround: See and Ground for Zero-Shot Open-Vocabulary 3D Visual Grounding ☆131 · Updated 2 months ago
- Official PyTorch implementation of the Unified Video Action Model (RSS 2025) ☆215 · Updated 3 months ago
- ☆64 · Updated 5 months ago
- [ICLR 2025 Spotlight] MetaUrban: An Embodied AI Simulation Platform for Urban Micromobility ☆182 · Updated last month
- EgoVid-5M: A Large-Scale Video-Action Dataset for Egocentric Video Generation ☆108 · Updated 7 months ago
- [NeurIPS'24] Implementation of "SpatialRGPT: Grounded Spatial Reasoning in Vision Language Models" ☆211 · Updated 6 months ago
- ☆62 · Updated 3 weeks ago
- [CVPR 2025] UniGoal: Towards Universal Zero-shot Goal-oriented Navigation ☆162 · Updated last month
- ☆100 · Updated 11 months ago
- [CVPR 2025] Source code for the paper "3D-Mem: 3D Scene Memory for Embodied Exploration and Reasoning" ☆130 · Updated 2 weeks ago
- [NeurIPS 2024 D&B] Point Cloud Matters: Rethinking the Impact of Different Observation Spaces on Robot Learning ☆80 · Updated 8 months ago
- ☆110 · Updated 8 months ago
- A comprehensive list of papers on the definition of World Models and on using World Models for General Video Generation, Embodied AI, and A… ☆171 · Updated this week
- Single-file implementation for advancing vision-language-action (VLA) models with reinforcement learning ☆131 · Updated last month
- ☆12 · Updated last month
- ☆97 · Updated 3 weeks ago
- [RSS'25] Implementation of "NaVILA: Legged Robot Vision-Language-Action Model for Navigation" ☆64 · Updated last week
- [TMLR 2024] Repository for VLN with foundation models ☆131 · Updated 3 months ago
- [CVPR 2023] CoWs on Pasture: Baselines and Benchmarks for Language-Driven Zero-Shot Object Navigation ☆136 · Updated last year
- 🔥 SpatialVLA: a spatially enhanced vision-language-action model trained on 1.1 million real robot episodes. Accepted at RSS 2025. ☆360 · Updated this week
- [CVPR 2025] Lift3D Foundation Policy: Lifting 2D Large-Scale Pretrained Models for Robust 3D Robotic Manipulation ☆146 · Updated this week
- ☆145 · Updated 2 months ago
- ☆60 · Updated 3 months ago
- [RSS 2024] Learning Manipulation by Predicting Interaction ☆109 · Updated 10 months ago
- [NeurIPS 2024] SG-Nav: Online 3D Scene Graph Prompting for LLM-based Zero-shot Object Navigation ☆213 · Updated 3 months ago
- SceneFun3D ToolKit ☆142 · Updated 2 months ago