eric-ai-lab / awesome-vision-language-navigation
A curated list for vision-and-language navigation, accompanying the ACL 2022 paper "Vision-and-Language Navigation: A Survey of Tasks, Methods, and Future Directions"
☆591 · May 2, 2024 · Updated last year
Alternatives and similar repositories for awesome-vision-language-navigation
Users interested in awesome-vision-language-navigation are comparing it to the repositories listed below
- Official implementation of Think Global, Act Local: Dual-scale Graph Transformer for Vision-and-Language Navigation (CVPR'22 Oral). ☆255 · Jun 27, 2023 · Updated 2 years ago
- A curated list of research papers in Vision-Language Navigation (VLN) ☆235 · Apr 17, 2024 · Updated last year
- Reading list for research topics in embodied vision ☆702 · Jun 13, 2025 · Updated 8 months ago
- Code of the CVPR 2021 Oral paper: A Recurrent Vision-and-Language BERT for Navigation ☆201 · Aug 13, 2022 · Updated 3 years ago
- Ideas and thoughts about the fascinating Vision-and-Language Navigation ☆293 · Jun 28, 2023 · Updated 2 years ago
- [AAAI 2024] Official implementation of NavGPT: Explicit Reasoning in Vision-and-Language Navigation with Large Language Models ☆314 · Nov 7, 2023 · Updated 2 years ago
- Official implementation of History Aware Multimodal Transformer for Vision-and-Language Navigation (NeurIPS'21). ☆143 · Jun 14, 2023 · Updated 2 years ago
- [CVPR 2024] The code for paper 'Towards Learning a Generalist Model for Embodied Navigation' ☆228 · Jun 18, 2024 · Updated last year
- Vision-and-Language Navigation in Continuous Environments using Habitat ☆722 · Jan 7, 2025 · Updated last year
- AI Research Platform for Reinforcement Learning from Real Panoramic Images. ☆675 · Jul 12, 2024 · Updated last year
- [TPAMI 2024] Official repo of "ETPNav: Evolving Topological Planning for Vision-Language Navigation in Continuous Environments" ☆416 · Apr 5, 2025 · Updated 10 months ago
- A curated list of awesome Vision-and-Language Navigation (VLN) resources (continually updated) ☆111 · Mar 9, 2025 · Updated 11 months ago
- [ICCV 2023] Official repo of "BEVBert: Multimodal Map Pre-training for Language-guided Navigation" ☆247 · Oct 31, 2023 · Updated 2 years ago
- The repository provides code associated with the paper VLFM: Vision-Language Frontier Maps for Zero-Shot Semantic Navigation (ICRA 2024) ☆677 · Nov 12, 2025 · Updated 3 months ago
- [ICRA 2023] Implementation of Visual Language Maps for Robot Navigation ☆646 · Jul 9, 2024 · Updated last year
- Official code and checkpoint release for mobile robot foundation models: GNM, ViNT, and NoMaD. ☆1,138 · Sep 15, 2024 · Updated last year
- [ECCV 2024] Official implementation of NavGPT-2: Unleashing Navigational Reasoning Capability for Large Vision-Language Models ☆238 · Sep 20, 2024 · Updated last year
- [TMLR 2024] repository for VLN with foundation models ☆247 · Oct 25, 2025 · Updated 3 months ago
- Code and Data of the CVPR 2022 paper: Bridging the Gap Between Learning in Discrete and Continuous Environments for Vision-and-Language N… ☆144 · Oct 31, 2023 · Updated 2 years ago
- ☆194 · Mar 29, 2025 · Updated 10 months ago
- Mobile manipulation research tools for roboticists ☆1,186 · Jun 8, 2024 · Updated last year
- Code of the paper "NavCoT: Boosting LLM-Based Vision-and-Language Navigation via Learning Disentangled Reasoning" (TPAMI 2025) ☆130 · Jun 4, 2025 · Updated 8 months ago
- Official implementation of Learning from Unlabeled 3D Environments for Vision-and-Language Navigation (ECCV'22). ☆43 · Mar 16, 2023 · Updated 2 years ago
- Pytorch code for NeurIPS-20 Paper "Object Goal Navigation using Goal-Oriented Semantic Exploration" ☆438 · Jul 20, 2023 · Updated 2 years ago
- ☆55 · Apr 1, 2022 · Updated 3 years ago
- ☆263 · Jan 14, 2025 · Updated last year
- Pytorch code for ICRA'21 paper: "Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation" ☆88 · Jun 27, 2024 · Updated last year
- Pytorch Code and Data for EnvEdit: Environment Editing for Vision-and-Language Navigation (CVPR 2022) ☆30 · Aug 2, 2022 · Updated 3 years ago
- Official repository of ICLR 2022 paper FILM: Following Instructions in Language with Modular Methods ☆127 · Apr 9, 2023 · Updated 2 years ago
- Repository for DialFRED. ☆46 · Sep 14, 2023 · Updated 2 years ago
- Know What and Know Where: An Object-and-Room Informed Sequential BERT for Indoor Vision-Language Navigation ☆16 · Feb 7, 2022 · Updated 4 years ago
- This is a curated list of "Embodied AI or robot with Large Language Models" research. Watch this repository for the latest updates! 🔥 ☆1,702 · Updated this week
- Codebase of ACL 2023 Findings "Aerial Vision-and-Dialog Navigation" ☆61 · Nov 4, 2024 · Updated last year
- Official repository for LeLaN training and inference code ☆131 · Sep 27, 2024 · Updated last year
- Official Implementation of IVLN-CE: Iterative Vision-and-Language Navigation in Continuous Environments ☆35 · Dec 16, 2023 · Updated 2 years ago
- ☆608 · Mar 25, 2023 · Updated 2 years ago
- [ICCV 2023 Oral]: Scaling Data Generation in Vision-and-Language Navigation ☆211 · Jul 2, 2025 · Updated 7 months ago
- Official implementation of "Grounded Entity-Landmark Adaptive Pre-training for Vision-and-Language Navigation" (ICCV 2023 Oral) ☆20 · Oct 21, 2023 · Updated 2 years ago
- [NeurIPS 2024] SG-Nav: Online 3D Scene Graph Prompting for LLM-based Zero-shot Object Navigation ☆316 · Sep 16, 2025 · Updated 4 months ago