facebookresearch / nwm
Official code for the CVPR 2025 paper "Navigation World Models".
☆297 · Updated last week
Alternatives and similar repositories for nwm
Users interested in nwm are comparing it to the repositories listed below.
- [ICLR 2025] SPA: 3D Spatial-Awareness Enables Effective Embodied Representation ☆162 · Updated 3 weeks ago
- Official implementation of the paper "StreamVLN: Streaming Vision-and-Language Navigation via SlowFast Context Modeling" ☆76 · Updated last week
- [CVPR'25] SeeGround: See and Ground for Zero-Shot Open-Vocabulary 3D Visual Grounding ☆152 · Updated 2 months ago
- [CoRL 2024] VLM-Grounder: A VLM Agent for Zero-Shot 3D Visual Grounding ☆108 · Updated last month
- [CVPR 2025] Lift3D Foundation Policy: Lifting 2D Large-Scale Pretrained Models for Robust 3D Robotic Manipulation ☆149 · Updated 3 weeks ago
- WorldVLA: Towards Autoregressive Action World Model ☆248 · Updated last week
- [ECCV 2024] ManiGaussian: Dynamic Gaussian Splatting for Multi-task Robotic Manipulation ☆232 · Updated 3 months ago
- [ICLR 2025 Spotlight] MetaUrban: An Embodied AI Simulation Platform for Urban Micromobility ☆188 · Updated last week
- [CVPR 2025] CityWalker: Learning Embodied Urban Navigation from Web-Scale Videos ☆118 · Updated 3 months ago
- ☆67 · Updated 6 months ago
- [ICCV 2025] A Simple yet Effective Pathway to Empowering LLaVA to Understand and Interact with 3D World ☆281 · Updated this week
- [RSS'25] Implementation of "NaVILA: Legged Robot Vision-Language-Action Model for Navigation" ☆135 · Updated last week
- Official implementation of Spatial-MLLM: Boosting MLLM Capabilities in Visual-based Spatial Intelligence ☆285 · Updated 3 weeks ago
- SceneFun3D ToolKit ☆147 · Updated 2 months ago
- SoFar: Language-Grounded Orientation Bridges Spatial Reasoning and Object Manipulation ☆175 · Updated 2 weeks ago
- [CVPR 2025] Code for the paper "Video-3D LLM: Learning Position-Aware Video Representation for 3D Scene Understanding" ☆133 · Updated last month
- [RSS 2025] Novel Demonstration Generation with Gaussian Splatting Enables Robust One-Shot Manipulation ☆119 · Updated last month
- [CVPR 2025] Source code for the paper "3D-Mem: 3D Scene Memory for Embodied Exploration and Reasoning" ☆149 · Updated last month
- Unified Vision-Language-Action Model ☆128 · Updated last week
- Repository for "TrackVLA: Embodied Visual Tracking in the Wild" ☆120 · Updated last week
- EgoVid-5M: A Large-Scale Video-Action Dataset for Egocentric Video Generation ☆109 · Updated 8 months ago
- ☆63 · Updated last month
- [CVPR 2024] GenNBV: Generalizable Next-Best-View Policy for Active 3D Reconstruction ☆67 · Updated last month
- ☆96 · Updated last week
- Code & Data for Grounded 3D-LLM with Referent Tokens ☆123 · Updated 6 months ago
- Official PyTorch implementation of Unified Video Action Model (RSS 2025) ☆232 · Updated 2 weeks ago
- VLM-3R: Vision-Language Models Augmented with Instruction-Aligned 3D Reconstruction ☆214 · Updated 2 weeks ago
- Official implementation of "RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics" ☆96 · Updated last week
- [NeurIPS'24] Implementation of "SpatialRGPT: Grounded Spatial Reasoning in Vision Language Models" ☆215 · Updated 7 months ago
- A comprehensive list of papers on the definition of World Models and their use in General Video Generation, Embodied AI, and A… ☆207 · Updated this week