GT-RIPL / robo-vln
PyTorch code for the ICRA'21 paper "Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation"
☆68 · Updated 4 months ago
Related projects
Alternatives and complementary repositories for robo-vln
- Code for sim-to-real transfer of a pretrained Vision-and-Language Navigation (VLN) agent to a robot using ROS. ☆36 · Updated 4 years ago
- ☆42 · Updated 2 years ago
- Code for reproducing the results of the NeurIPS 2020 paper "MultiON: Benchmarking Semantic Map Memory using Multi-Object Navigation" ☆46 · Updated 3 years ago
- Repository of the ECCV 2020 paper "Active Visual Information Gathering for Vision-Language Navigation" ☆44 · Updated 2 years ago
- Code and Data for our CVPR 2021 paper "Structured Scene Memory for Vision-Language Navigation" ☆36 · Updated 3 years ago
- 🔀 Visual Room Rearrangement ☆105 · Updated last year
- Code and data of the Fine-Grained R2R Dataset proposed in the EMNLP 2020 paper "Sub-Instruction Aware Vision-and-Language Navigation" ☆43 · Updated 3 years ago
- Code of the NeurIPS 2020 paper "Language and Visual Entity Relationship Graph for Agent Navigation" ☆45 · Updated 3 years ago
- Code of the CVPR 2021 Oral paper "A Recurrent Vision-and-Language BERT for Navigation" ☆153 · Updated 2 years ago
- Habitat-Web is a web application to collect human demonstrations for embodied tasks on Amazon Mechanical Turk (AMT) using the Habitat sim… ☆50 · Updated 2 years ago
- Implementation (R2R part) for the paper "Iterative Vision-and-Language Navigation" ☆13 · Updated 7 months ago
- Codebase for the Airbert paper ☆42 · Updated last year
- ☆75 · Updated 2 years ago
- Code and Data of the CVPR 2022 paper "Bridging the Gap Between Learning in Discrete and Continuous Environments for Vision-and-Language Navigation" ☆92 · Updated last year
- Pushing it out of the Way: Interactive Visual Navigation ☆34 · Updated 9 months ago
- Code for TIDEE: Novel Room Reorganization using Visuo-Semantic Common Sense Priors ☆37 · Updated last year
- Official implementation of the NRNS paper ☆35 · Updated 2 years ago
- Official implementation of Learning from Unlabeled 3D Environments for Vision-and-Language Navigation (ECCV'22). ☆34 · Updated last year
- Implementation of our ICCV 2023 paper DREAMWALKER: Mental Planning for Continuous Vision-Language Navigation ☆19 · Updated last year
- Repository of our NeurIPS 2022 paper "Towards Versatile Embodied Navigation" ☆20 · Updated last year
- PyTorch code and data for EnvEdit: Environment Editing for Vision-and-Language Navigation (CVPR 2022) ☆32 · Updated 2 years ago
- Official implementation of History Aware Multimodal Transformer for Vision-and-Language Navigation (NeurIPS'21). ☆99 · Updated last year
- Official implementation of KERM: Knowledge Enhanced Reasoning for Vision-and-Language Navigation (CVPR'23) ☆35 · Updated 3 months ago
- Dataset and baseline for Scenario Oriented Object Navigation (SOON) ☆17 · Updated 2 years ago
- Code for training embodied agents using imitation learning at scale in Habitat-Lab ☆34 · Updated 2 years ago
- Code of the CVPR 2022 paper "HOP: History-and-Order Aware Pre-training for Vision-and-Language Navigation" ☆29 · Updated last year
- [ICCV'23] Learning Vision-and-Language Navigation from YouTube Videos ☆41 · Updated last year
- ☆32 · Updated 3 years ago
- Resources for Auxiliary Tasks and Exploration Enable ObjectNav ☆39 · Updated 3 years ago
- ☆26 · Updated last year