saimwani / multiON
Code for reproducing the results of the NeurIPS 2020 paper "MultiON: Benchmarking Semantic Map Memory using Multi-Object Navigation"
☆52 · Updated 4 years ago
Alternatives and similar repositories for multiON
Users interested in multiON are comparing it to the repositories listed below.
- Code for training embodied agents using IL and RL finetuning at scale for ObjectNav — ☆75 · Updated 3 months ago
- Habitat-Web is a web application to collect human demonstrations for embodied tasks on Amazon Mechanical Turk (AMT) using the Habitat sim… — ☆57 · Updated 3 years ago
- Official GitHub repository for the paper "Visual Graph Memory with Unsupervised Representation for Visual Navigation", ICCV 2021 — ☆65 · Updated 8 months ago
- PONI: Potential Functions for ObjectGoal Navigation with Interaction-free Learning. CVPR 2022 (Oral). — ☆101 · Updated 2 years ago
- Zero Experience Required: Plug & Play Modular Transfer Learning for Semantic Visual Navigation. CVPR 2022 — ☆31 · Updated 2 years ago
- ZSON: Zero-Shot Object-Goal Navigation using Multimodal Goal Embeddings. NeurIPS 2022 — ☆77 · Updated 2 years ago
- ☆36 · Updated 4 years ago
- ☆80 · Updated 3 years ago
- PyTorch code for the ICRA 2021 paper "Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation" — ☆79 · Updated last year
- ☆22 · Updated last year
- Python implementation of the paper "Learning Hierarchical Relationships for Object-Goal Navigation" — ☆46 · Updated 2 years ago
- ☆40 · Updated 2 years ago
- ☆45 · Updated 3 years ago
- ☆34 · Updated 3 years ago
- Code for training embodied agents using imitation learning at scale in Habitat-Lab — ☆42 · Updated 3 months ago
- ☆51 · Updated 3 years ago
- Official implementation of the NRNS paper — ☆36 · Updated 3 years ago
- Official code release for "Navigation with Large Language Models: Semantic Guesswork as a Heuristic for Planning" — ☆51 · Updated last year
- Resources for "Auxiliary Tasks and Exploration Enable ObjectNav" — ☆40 · Updated 3 years ago
- Code for sim-to-real transfer of a pretrained Vision-and-Language Navigation (VLN) agent to a robot using ROS — ☆44 · Updated 4 years ago
- Code and data for the CVPR 2022 paper "Bridging the Gap Between Learning in Discrete and Continuous Environments for Vision-and-Language N…" — ☆127 · Updated last year
- Code for LGX (Language Guided Exploration), which uses LLMs to perform embodied robot navigation in a zero-shot manner — ☆64 · Updated last year
- [ICCV 2023] Learning Vision-and-Language Navigation from YouTube Videos — ☆57 · Updated 6 months ago
- [CVPR 2023] CoWs on Pasture: Baselines and Benchmarks for Language-Driven Zero-Shot Object Navigation — ☆137 · Updated last year
- Official implementation of IVLN-CE: Iterative Vision-and-Language Navigation in Continuous Environments