AnjieCheng / NaVILA
[RSS'25] This repository is the implementation of "NaVILA: Legged Robot Vision-Language-Action Model for Navigation"
☆513 · Updated Aug 20, 2025
Alternatives and similar repositories for NaVILA
Users interested in NaVILA are comparing it to the repositories listed below.
- Low-level locomotion policy training in Isaac Lab · ☆402 · Updated Mar 7, 2025
- InternRobotics' open platform for building generalized navigation foundation models. · ☆681 · Updated this week
- Vision-Language Navigation Benchmark in Isaac Lab · ☆292 · Updated Aug 28, 2025
- [ICRA 2026] Official implementation of the paper: "StreamVLN: Streaming Vision-and-Language Navigation via SlowFast Context Modeling" · ☆399 · Updated Nov 2, 2025
- Official implementation of the paper: "NavDP: Learning Sim-to-Real Navigation Diffusion Policy with Privileged Information Guidance" · ☆529 · Updated Jan 12, 2026
- [RSS 2024 & RSS 2025] VLN-CE evaluation code of NaVid and Uni-NaVid · ☆372 · Updated Oct 15, 2025
- [RSS 2025] Uni-NaVid: A Video-based Vision-Language-Action Model for Unifying Embodied Navigation Tasks. · ☆230 · Updated Dec 15, 2025
- Vision-and-Language Navigation in Continuous Environments using Habitat · ☆722 · Updated Jan 7, 2025
- Code associated with the paper "VLFM: Vision-Language Frontier Maps for Zero-Shot Semantic Navigation" (ICRA 2024) · ☆677 · Updated Nov 12, 2025
- A Robust Tightly-Coupled RGBD-Inertial and Legged Odometry Fusion SLAM for Dynamic Legged Robotics · ☆116 · Updated Dec 10, 2025
- ☆234 · Updated Aug 6, 2025
- [CoRL 2025] Repository relating to "TrackVLA: Embodied Visual Tracking in the Wild"