wzcai99 / Awesom-Embodied-Navigation
Paper & Project lists of cutting-edge research on visual navigation and embodied AI.
☆26 · Updated last year
Alternatives and similar repositories for Awesom-Embodied-Navigation:
Users interested in Awesom-Embodied-Navigation are comparing it to the repositories listed below
- Official code release for "Navigation with Large Language Models: Semantic Guesswork as a Heuristic for Planning" ☆46 · Updated last year
- Official GitHub repository for the paper "Bridging Zero-shot Object Navigation and Foundation Models through Pixel-Guided Navigation Skill", … ☆88 · Updated 4 months ago
- Commonsense Scene Graph-based Target Localization for Object Search ☆12 · Updated 11 months ago
- ☆29 · Updated 2 years ago
- End-to-End Navigation with VLMs ☆55 · Updated last month
- [RA-L 2025] Dynamic Open-Vocabulary 3D Scene Graphs for Long-term Language-Guided Mobile Manipulation ☆47 · Updated 2 months ago
- ☆76 · Updated 8 months ago
- Language-Grounded Dynamic Scene Graphs for Interactive Object Search with Mobile Manipulation. Project website: http://moma-llm.cs.uni-fr… ☆70 · Updated 7 months ago
- A curated list of awesome Vision-and-Language Navigation (VLN) resources (continually updated) ☆61 · Updated this week
- ☆49 · Updated 2 months ago
- Official implementation of OpenFMNav: Towards Open-Set Zero-Shot Object Navigation via Vision-Language Foundation Models ☆34 · Updated 5 months ago
- Open Vocabulary Object Navigation ☆59 · Updated 3 weeks ago
- ☆35 · Updated 2 months ago
- [CVPR 2023] We propose a framework for the challenging 3D-aware ObjectNav based on two straightforward sub-policies. The two sub-policies,… ☆66 · Updated 9 months ago
- Grounding Large Language Models for Dynamic Planning to Navigation in New Environments ☆26 · Updated 8 months ago
- [Submitted to ICRA 2025] COHERENT: Collaboration of Heterogeneous Multi-Robot System with Large Language Models ☆33 · Updated 3 weeks ago
- ☆110 · Updated 4 months ago
- [ISER 2023] The official implementation of Audio Visual Language Maps for Robot Navigation ☆33 · Updated 10 months ago
- Code for LGX (Language Guided Exploration). We use LLMs to perform embodied robot navigation in a zero-shot manner. ☆59 · Updated last year
- Vision-Language Navigation Benchmark in Isaac Lab ☆104 · Updated 2 months ago
- Official implementation for the ECCV 2024 paper "Prioritized Semantic Learning for Zero-shot Instance Navigation" ☆29 · Updated 5 months ago
- PyTorch implementation of the paper GaussNav: Gaussian Splatting for Visual Navigation ☆95 · Updated 4 months ago
- PoliFormer: Scaling On-Policy RL with Transformers Results in Masterful Navigators ☆66 · Updated 3 months ago
- Towards Long-Horizon Vision-Language Navigation: Platform, Benchmark and Method (CVPR 2025) ☆28 · Updated this week
- Official implementation of GridMM: Grid Memory Map for Vision-and-Language Navigation (ICCV 2023) ☆80 · Updated 10 months ago
- [CoRL 2023] Open-Vocabulary Scene Graph ☆64 · Updated last year
- CMU Vision-Language-Autonomy Challenge - Unity Setup ☆60 · Updated last month
- Last-Mile Embodied Visual Navigation https://jbwasse2.github.io/portfolio/SLING/ ☆27 · Updated 2 years ago
- RoomTour3D - Geometry-aware, cheap and automatic data from web videos for embodied navigation ☆19 · Updated 2 months ago
- Leveraging Large Language Models for Visual Target Navigation ☆108 · Updated last year