HanqingWangAI / VXN
Repository of our accepted NeurIPS 2022 paper "Towards Versatile Embodied Navigation"
☆21 · Updated 2 years ago
Alternatives and similar repositories for VXN
Users interested in VXN are comparing it to the libraries listed below.
- [ACM MM 2022] Target-Driven Structured Transformer Planner for Vision-Language Navigation ☆15 · Updated 2 years ago
- Dataset and baseline for Scenario Oriented Object Navigation (SOON) ☆18 · Updated 3 years ago
- Code and Data for our CVPR 2021 paper "Structured Scene Memory for Vision-Language Navigation" ☆39 · Updated 3 years ago
- Official Implementation of IVLN-CE: Iterative Vision-and-Language Navigation in Continuous Environments ☆34 · Updated last year
- Official implementation of the NRNS paper ☆36 · Updated 3 years ago
- Code of the NeurIPS 2021 paper: Language and Visual Entity Relationship Graph for Agent Navigation ☆45 · Updated 3 years ago
- Official implementation of Layout-aware Dreamer for Embodied Referring Expression Grounding [AAAI 23] ☆17 · Updated 2 years ago
- PyTorch code for ICRA'21 paper: "Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation" ☆79 · Updated last year
- Official implementation of Learning from Unlabeled 3D Environments for Vision-and-Language Navigation (ECCV'22) ☆41 · Updated 2 years ago
- ☆18 · Updated 2 years ago
- Official implementation of History Aware Multimodal Transformer for Vision-and-Language Navigation (NeurIPS'21) ☆123 · Updated 2 years ago
- Code for NeurIPS 2021 paper "Curriculum Learning for Vision-and-Language Navigation" ☆15 · Updated 2 years ago
- Official implementation of KERM: Knowledge Enhanced Reasoning for Vision-and-Language Navigation (CVPR'23) ☆43 · Updated 11 months ago
- [ICCV'23] Learning Vision-and-Language Navigation from YouTube Videos ☆57 · Updated 6 months ago
- Implementation of our ICCV 2023 paper DREAMWALKER: Mental Planning for Continuous Vision-Language Navigation ☆19 · Updated last year
- ☆40 · Updated 2 years ago
- ☆51 · Updated 3 years ago
- ☆34 · Updated 3 years ago
- Codebase for the Airbert paper ☆45 · Updated 2 years ago
- Code of the CVPR 2022 paper "HOP: History-and-Order Aware Pre-training for Vision-and-Language Navigation" ☆30 · Updated last year
- Habitat-Web is a web application to collect human demonstrations for embodied tasks on Amazon Mechanical Turk (AMT) using the Habitat sim… ☆57 · Updated 3 years ago
- ☆33 · Updated last year
- Implementation (R2R part) for the paper "Iterative Vision-and-Language Navigation" ☆17 · Updated last year
- Official PyTorch implementation for NeurIPS 2022 paper "Weakly-Supervised Multi-Granularity Map Learning for Vision-and-Language Navigati… ☆33 · Updated 2 years ago
- ☆24 · Updated 3 years ago
- Code and data of the Fine-Grained R2R Dataset proposed in the EMNLP 2021 paper Sub-Instruction Aware Vision-and-Language Navigation ☆48 · Updated 3 years ago
- ☆36 · Updated 4 years ago
- Code of the CVPR 2021 Oral paper: A Recurrent Vision-and-Language BERT for Navigation ☆181 · Updated 2 years ago
- Repository of our accepted CVPR 2022 paper "Counterfactual Cycle-Consistent Learning for Instruction Following and Generation in Vision-La… ☆28 · Updated 3 years ago
- PyTorch code and data for EnvEdit: Environment Editing for Vision-and-Language Navigation (CVPR 2022) ☆32 · Updated 2 years ago