ziadalh / zero_experience_required
Zero Experience Required: Plug & Play Modular Transfer Learning for Semantic Visual Navigation. CVPR 2022
☆31 Updated 2 years ago
Alternatives and similar repositories for zero_experience_required
Users interested in zero_experience_required are comparing it to the libraries listed below.
- ☆33 Updated 2 years ago
- Official implementation of NeurIPS 2022 paper "Learning Active Camera for Multi-Object Navigation" ☆10 Updated 2 years ago
- Official implementation of the NRNS paper ☆36 Updated 2 years ago
- ☆36 Updated 3 years ago
- Python implementation of the paper Learning hierarchical relationships for object-goal navigation ☆46 Updated 2 years ago
- Dual Adaptive Thinking (DAT) for object navigation ☆12 Updated 2 years ago
- PONI: Potential Functions for ObjectGoal Navigation with Interaction-free Learning. CVPR 2022 (Oral). ☆98 Updated 2 years ago
- ☆40 Updated 2 years ago
- Unbiased Directed Object Attention Graph for Object Navigation ☆14 Updated 2 years ago
- Code for reproducing the results of NeurIPS 2020 paper "MultiON: Benchmarking Semantic Map Memory using Multi-Object Navigation" ☆51 Updated 4 years ago
- Official implementation for ECCV 2024 paper "Prioritized Semantic Learning for Zero-shot Instance Navigation" ☆36 Updated 8 months ago
- ☆22 Updated 10 months ago
- Code for training embodied agents using IL and RL finetuning at scale for ObjectNav ☆69 Updated last month
- [CVPR 2023] We propose a framework for the challenging 3D-aware ObjectNav based on two straightforward sub-policies. The two sub-policies,… ☆72 Updated last year
- Official code release for "Navigation with Large Language Models: Semantic Guesswork as a Heuristic for Planning" ☆50 Updated last year
- ☆50 Updated 3 years ago
- Open Vocabulary Object Navigation ☆76 Updated 3 weeks ago
- Aligning Knowledge Graph with Visual Perception for Object-goal Navigation (ICRA 2024) ☆33 Updated 2 months ago
- Code and Data for Paper: Boosting Efficient Reinforcement Learning for Vision-and-Language Navigation With Open-Sourced LLM ☆11 Updated 3 months ago
- Code for LGX (Language Guided Exploration). We use LLMs to perform embodied robot navigation in a zero-shot manner. ☆62 Updated last year
- [ICRA 2021] SSCNav: Confidence-Aware Semantic Scene Completion for Visual Semantic Navigation ☆44 Updated 4 years ago
- Python tools to work with the habitat-sim environment. ☆28 Updated last year
- Official implementation of GridMM: Grid Memory Map for Vision-and-Language Navigation (ICCV'23). ☆90 Updated last year
- ☆41 Updated last year
- Official implementation of NeurIPS 2023 paper "FGPrompt: Fine-grained Goal Prompting for Image-goal Navigation" ☆33 Updated last year
- Official implementation of Learning from Unlabeled 3D Environments for Vision-and-Language Navigation (ECCV'22). ☆41 Updated 2 years ago
- ☆25 Updated 3 years ago
- ☆29 Updated 2 years ago
- Code for training embodied agents using imitation learning at scale in Habitat-Lab ☆42 Updated last month
- Repository for the "Exploiting Proximity-Aware Tasks for Embodied Social Navigation" paper code ☆9 Updated last year