InternRobotics / InternNav
InternRobotics' open platform for building generalized navigation foundation models.
☆502 · Updated last week
Alternatives and similar repositories for InternNav
Users interested in InternNav are comparing it to the libraries listed below.
- Official implementation of the paper: "StreamVLN: Streaming Vision-and-Language Navigation via SlowFast Context Modeling"☆345Updated last month
- [RSS'25] This repository is the implementation of "NaVILA: Legged Robot Vision-Language-Action Model for Navigation"☆438Updated 4 months ago
- [RSS 2024 & RSS 2025] VLN-CE evaluation code of NaVid and Uni-NaVid☆340Updated 2 months ago
- [RSS 2025] Uni-NaVid: A Video-based Vision-Language-Action Model for Unifying Embodied Navigation Tasks.☆194Updated last week
- [CVPR 2025] UniGoal: Towards Universal Zero-shot Goal-oriented Navigation☆285Updated 3 months ago
- [TMLR 2024] repository for VLN with foundation models☆228Updated 2 months ago
- ☆184Updated 8 months ago
- Official implementation of the paper: "NavDP: Learning Sim-to-Real Navigation Diffusion Policy with Privileged Information Guidance"☆312Updated last week
- Vision-Language Navigation Benchmark in Isaac Lab☆283Updated 3 months ago
- [NeurIPS 2024] SG-Nav: Online 3D Scene Graph Prompting for LLM-based Zero-shot Object Navigation☆300Updated 3 months ago
- [ECCV 2024] Official implementation of NavGPT-2: Unleashing Navigational Reasoning Capability for Large Vision-Language Models☆230Updated last year
- [CoRL 2025] Repository relating to "TrackVLA: Embodied Visual Tracking in the Wild"☆303Updated last month
- Low-level locomotion policy training in Isaac Lab☆375Updated 9 months ago
- [ICRA 2025] Official implementation of Open-Nav: Exploring Zero-Shot Vision-and-Language Navigation in Continuous Environment with Open-S…☆101Updated 6 months ago
- ☆222Updated 4 months ago
- A curated list of awesome Vision-and-Language Navigation(VLN) resources (continually updated)☆108Updated 9 months ago
- Code for OctoNav-R1☆62Updated 6 months ago
- The repository provides code associated with the paper VLFM: Vision-Language Frontier Maps for Zero-Shot Semantic Navigation (ICRA 2024)☆640Updated last month
- [IROS'25 Oral] WMNav: Integrating Vision-Language Models into World Models for Object Goal Navigation☆133Updated 2 months ago
- A curated list of large VLM-based VLA models for robotic manipulation.☆288Updated this week
- End-to-End Navigation with VLMs☆109Updated 8 months ago
- A Chinese-language tutorial on using Habitat-sim ☆69 · Updated 2 years ago
- Awesome Embodied Navigation: Concept, Paradigm and State-of-the-arts ☆162 · Updated last year
- [Actively Maintained🔥] A list of Embodied AI papers accepted by top conferences (ICLR, NeurIPS, ICML, RSS, CoRL, ICRA, IROS, CVPR, ICCV,… ☆441 · Updated 3 weeks ago
- [CVPR 2024] The code for the paper 'Towards Learning a Generalist Model for Embodied Navigation' ☆57 · Updated last year
- Official GitHub Repository for Paper "Bridging Zero-shot Object Navigation and Foundation Models through Pixel-Guided Navigation Skill", … ☆127 · Updated last year
- The official implementation of "Soft-Prompted Transformer as Scalable Cross-Embodiment Vision-Language-Action Model" ☆378 · Updated 3 weeks ago
- 🔥 SpatialVLA: a spatial-enhanced vision-language-action model that is trained on 1.1 million real robot episodes. Accepted at RSS 2025. ☆600 · Updated 6 months ago
- Code of the paper "NavCoT: Boosting LLM-Based Vision-and-Language Navigation via Learning Disentangled Reasoning" (TPAMI 2025) ☆124 · Updated 6 months ago
- ☆116 · Updated last year