xiaobaishu0097 / ICLR_VTNet
☆38 · Updated 4 years ago
Alternatives and similar repositories for ICLR_VTNet
Users interested in ICLR_VTNet are comparing it to the repositories listed below.
- ☆38 · Updated 3 years ago
- Python implementation of the paper "Learning hierarchical relationships for object-goal navigation" ☆48 · Updated 2 years ago
- ☆40 · Updated 2 years ago
- Visual Navigation with Spatial Attention ☆38 · Updated 10 months ago
- Zero Experience Required: Plug & Play Modular Transfer Learning for Semantic Visual Navigation (CVPR 2022) ☆35 · Updated 3 years ago
- Code for reproducing the results of the NeurIPS 2020 paper "MultiON: Benchmarking Semantic Map Memory using Multi-Object Navigation" ☆55 · Updated 4 years ago
- Resources for "Auxiliary Tasks and Exploration Enable ObjectNav" ☆40 · Updated 4 years ago
- Hierarchical Object-to-Zone Graph for Object Navigation (ICCV 2021) ☆50 · Updated 3 years ago
- Official GitHub repository for the paper "Visual Graph Memory with Unsupervised Representation for Visual Navigation" (ICCV 2021) ☆65 · Updated last year
- Unbiased Directed Object Attention Graph for Object Navigation ☆15 · Updated 2 years ago
- Dual Adaptive Thinking (DAT) for object navigation ☆13 · Updated 3 years ago
- Code for training embodied agents using IL and RL finetuning at scale for ObjectNav ☆83 · Updated 7 months ago
- Papers and summaries on the state of the art in robot target-driven navigation ☆48 · Updated 3 years ago
- PyTorch code for the ICRA 2021 paper "Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation" ☆88 · Updated last year
- Official implementation of the NRNS paper ☆36 · Updated 3 years ago
- Target-driven Visual Navigation in Indoor Scenes using Deep Reinforcement Learning, implemented in PyTorch ☆70 · Updated 5 years ago
- ☆16 · Updated last year
- Reinforcement Learning-based Visual Navigation with Information-Theoretic Regularization ☆33 · Updated 4 years ago
- PONI: Potential Functions for ObjectGoal Navigation with Interaction-free Learning (CVPR 2022, Oral) ☆110 · Updated 2 years ago
- ☆15 · Updated 5 months ago
- Code for LGX (Language Guided Exploration), which uses LLMs to perform embodied robot navigation in a zero-shot manner ☆66 · Updated 2 years ago
- ☆53 · Updated 3 years ago
- ☆24 · Updated last year
- Habitat-Web, a web application to collect human demonstrations for embodied tasks on Amazon Mechanical Turk (AMT) using the Habitat sim… ☆59 · Updated 3 years ago
- [ICRA 2021] SSCNav: Confidence-Aware Semantic Scene Completion for Visual Semantic Navigation ☆45 · Updated 4 years ago
- Official implementation of the NeurIPS 2022 paper "Learning Active Camera for Multi-Object Navigation" ☆10 · Updated 2 years ago
- Code for training embodied agents using imitation learning at scale in Habitat-Lab ☆44 · Updated 7 months ago
- Code and additional information for the paper "Scene Augmentation Methods for Interactive Embodied AI Tasks" ☆10 · Updated 2 years ago
- Repository for the NeurIPS 2022 paper "Towards Versatile Embodied Navigation" ☆21 · Updated 2 years ago
- Pushing It Out of the Way: Interactive Visual Navigation ☆42 · Updated last year