Ram81 / habitat-web
Habitat-Web is a web application to collect human demonstrations for embodied tasks on Amazon Mechanical Turk (AMT) using the Habitat simulator.
☆52 · Updated 2 years ago
Alternatives and similar repositories for habitat-web:
Users interested in habitat-web are comparing it to the repositories listed below.
- Code for reproducing the results of the NeurIPS 2020 paper "MultiON: Benchmarking Semantic Map Memory using Multi-Object Navigation" ☆48 · Updated 4 years ago
- Code for training embodied agents using IL and RL finetuning at scale for ObjectNav ☆63 · Updated last year
- Code for training embodied agents using imitation learning at scale in Habitat-Lab ☆36 · Updated 2 years ago
- PyTorch code for the ICRA'21 paper "Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation" ☆73 · Updated 7 months ago
- Official implementation of the NRNS paper ☆35 · Updated 2 years ago
- ☆79 · Updated 2 years ago
- Zero Experience Required: Plug & Play Modular Transfer Learning for Semantic Visual Navigation. CVPR 2022 ☆31 · Updated 2 years ago
- PONI: Potential Functions for ObjectGoal Navigation with Interaction-free Learning. CVPR 2022 (Oral). ☆89 · Updated 2 years ago
- [ICCV 2023] PEANUT: Predicting and Navigating to Unseen Targets ☆43 · Updated 11 months ago
- Python implementation of the paper "Learning hierarchical relationships for object-goal navigation" ☆44 · Updated 2 years ago
- IsaacSim extension for dynamic objects in Matterport3D environments for AdaVLN research ☆21 · Updated 2 months ago
- Code for the ECCV 2020 publication "Occupancy Anticipation for Efficient Exploration and Navigation" ☆79 · Updated last year
- Pushing it out of the Way: Interactive Visual Navigation ☆35 · Updated last year
- Resources for "Auxiliary Tasks and Exploration Enable ObjectNav" ☆40 · Updated 3 years ago
- Implementation of the ICCV 2023 paper "DREAMWALKER: Mental Planning for Continuous Vision-Language Navigation" ☆19 · Updated last year
- ☆39 · Updated 2 years ago
- ZSON: Zero-Shot Object-Goal Navigation using Multimodal Goal Embeddings. NeurIPS 2022 ☆66 · Updated 2 years ago
- Utility functions for working with AI2-THOR. Try to do one thing once. ☆45 · Updated 2 years ago
- Repository of the accepted NeurIPS 2022 paper "Towards Versatile Embodied Navigation" ☆20 · Updated 2 years ago
- Public video DQN code ☆27 · Updated 2 years ago
- Official implementation of "Learning from Unlabeled 3D Environments for Vision-and-Language Navigation" (ECCV'22) ☆39 · Updated last year
- ☆34 · Updated last year
- 🔀 Visual Room Rearrangement ☆106 · Updated last year
- Official implementation of IVLN-CE: Iterative Vision-and-Language Navigation in Continuous Environments ☆30 · Updated last year
- ☆46 · Updated 2 years ago
- [ICRA 2023] Grounding Language with Visual Affordances over Unstructured Data ☆38 · Updated last year
- Code for "TIDEE: Novel Room Reorganization using Visuo-Semantic Common Sense Priors" ☆37 · Updated last year
- RL training scripts for learning an agent using ProcTHOR ☆20 · Updated 9 months ago
- [ICCV'23] Learning Vision-and-Language Navigation from YouTube Videos ☆48 · Updated last month