eric-ai-lab / awesome-vision-language-navigation
A curated list for vision-and-language navigation, accompanying the ACL 2022 paper "Vision-and-Language Navigation: A Survey of Tasks, Methods, and Future Directions".
☆583 · Updated last year
Alternatives and similar repositories for awesome-vision-language-navigation
Users interested in awesome-vision-language-navigation are comparing it to the libraries listed below.
- Ideas and thoughts about the fascinating Vision-and-Language Navigation ☆290 · Updated 2 years ago
- Vision-and-Language Navigation in Continuous Environments using Habitat ☆687 · Updated last year
- Reading list for research topics in embodied vision ☆699 · Updated 7 months ago
- Official implementation of "Think Global, Act Local: Dual-Scale Graph Transformer for Vision-and-Language Navigation" (CVPR 2022 Oral) ☆252 · Updated 2 years ago
- [AAAI 2024] Official implementation of NavGPT: Explicit Reasoning in Vision-and-Language Navigation with Large Language Models ☆305 · Updated 2 years ago
- A curated list of research papers in Vision-Language Navigation (VLN) ☆232 · Updated last year
- [CVPR 2024] Code for the paper "Towards Learning a Generalist Model for Embodied Navigation" ☆227 · Updated last year
- ☆122 · Updated 2 years ago
- [ECCV 2024] Official implementation of NavGPT-2: Unleashing Navigational Reasoning Capability for Large Vision-Language Models ☆233 · Updated last year
- [TMLR 2024] Repository for VLN with foundation models ☆240 · Updated 2 months ago
- Code for the CVPR 2021 Oral paper "A Recurrent Vision-and-Language BERT for Navigation" ☆201 · Updated 3 years ago
- Official implementation of History Aware Multimodal Transformer for Vision-and-Language Navigation (NeurIPS 2021) ☆141 · Updated 2 years ago
- Code for the paper "NavCoT: Boosting LLM-Based Vision-and-Language Navigation via Learning Disentangled Reasoning" (TPAMI 2025) ☆126 · Updated 7 months ago
- [TPAMI 2024] Official repo of "ETPNav: Evolving Topological Planning for Vision-Language Navigation in Continuous Environments" ☆408 · Updated 9 months ago
- Code associated with the paper "VLFM: Vision-Language Frontier Maps for Zero-Shot Semantic Navigation" (ICRA 2024) ☆651 · Updated 2 months ago
- PyTorch code for the NeurIPS 2020 paper "Object Goal Navigation using Goal-Oriented Semantic Exploration" ☆434 · Updated 2 years ago
- A curated list of awesome papers on Embodied AI and related research/industry-driven resources ☆496 · Updated 7 months ago
- ☆259 · Updated last year
- [NeurIPS 2024] SG-Nav: Online 3D Scene Graph Prompting for LLM-based Zero-shot Object Navigation ☆305 · Updated 4 months ago
- ☆189 · Updated 9 months ago
- [ICRA 2023] Implementation of Visual Language Maps for Robot Navigation ☆626 · Updated last year
- [RSS 2024 & RSS 2025] VLN-CE evaluation code for NaVid and Uni-NaVid ☆360 · Updated 3 months ago
- Code and data for the CVPR 2022 paper "Bridging the Gap Between Learning in Discrete and Continuous Environments for Vision-and-Language N…" ☆143 · Updated 2 years ago
- ZSON: Zero-Shot Object-Goal Navigation using Multimodal Goal Embeddings (NeurIPS 2022) ☆97 · Updated 2 years ago
- Leveraging Large Language Models for Visual Target Navigation ☆154 · Updated 2 years ago
- REVERIE: Remote Embodied Visual Referring Expression in Real Indoor Environments ☆147 · Updated 2 years ago
- ☆55 · Updated 3 years ago
- [CVPR 2023] CoWs on Pasture: Baselines and Benchmarks for Language-Driven Zero-Shot Object Navigation ☆149 · Updated 2 years ago
- [CVPR 2024 & NeurIPS 2024] EmbodiedScan: A Holistic Multi-Modal 3D Perception Suite Towards Embodied AI ☆650 · Updated 7 months ago
- Room-across-Room (RxR) is a large-scale, multilingual dataset for Vision-and-Language Navigation (VLN) in Matterport3D environments. It c… ☆170 · Updated 2 years ago