Ram81 / habitat-web
Habitat-Web is a web application to collect human demonstrations for embodied tasks on Amazon Mechanical Turk (AMT) using the Habitat simulator.
☆54 · Updated 2 years ago
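For context, the demonstrations Habitat-Web collects are ultimately sequences of discrete agent actions executed in the Habitat simulator. The snippet below is a minimal sketch of driving a Habitat-Sim agent through such an action sequence; it is not taken from this repository, and the scene path, sensor resolution, and action list are illustrative assumptions.

```python
import habitat_sim

# Configure the simulator backend with an example scene (path is a placeholder).
sim_cfg = habitat_sim.SimulatorConfiguration()
sim_cfg.scene_id = "data/scene_datasets/habitat-test-scenes/skokloster-castle.glb"

# One agent with a single RGB camera; the default discrete actions
# (move_forward, turn_left, turn_right) are kept.
rgb_spec = habitat_sim.CameraSensorSpec()
rgb_spec.uuid = "color_sensor"
rgb_spec.sensor_type = habitat_sim.SensorType.COLOR
rgb_spec.resolution = [480, 640]
agent_cfg = habitat_sim.agent.AgentConfiguration()
agent_cfg.sensor_specifications = [rgb_spec]

sim = habitat_sim.Simulator(habitat_sim.Configuration(sim_cfg, [agent_cfg]))

# Step through a short action sequence, roughly what a teleoperating
# worker would produce, and keep the resulting (action, frame) pairs.
trajectory = []
for action in ["move_forward", "turn_left", "move_forward"]:
    observations = sim.step(action)
    trajectory.append({"action": action, "rgb": observations["color_sensor"]})

sim.close()
```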
Alternatives and similar repositories for habitat-web
Users interested in habitat-web are comparing it to the libraries listed below.
- Code for training embodied agents using IL and RL finetuning at scale for ObjectNav ☆70 · Updated last month
- Code for training embodied agents using imitation learning at scale in Habitat-Lab ☆40 · Updated last month
- PyTorch code for the ICRA'21 paper "Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation" ☆78 · Updated 10 months ago
- Python tools to work with the habitat-sim environment. ☆26 · Updated last year
- PONI: Potential Functions for ObjectGoal Navigation with Interaction-free Learning. CVPR 2022 (Oral). ☆98 · Updated 2 years ago
- ZSON: Zero-Shot Object-Goal Navigation using Multimodal Goal Embeddings. NeurIPS 2022 ☆72 · Updated 2 years ago
- Code for reproducing the results of the NeurIPS 2020 paper "MultiON: Benchmarking Semantic Map Memory using Multi-Object Navigation" ☆50 · Updated 4 years ago
- ☆80 · Updated 3 years ago
- Zero Experience Required: Plug & Play Modular Transfer Learning for Semantic Visual Navigation. CVPR 2022 ☆31 · Updated 2 years ago
- Official implementation of the NRNS paper ☆36 · Updated 2 years ago
- Official implementation of Learning from Unlabeled 3D Environments for Vision-and-Language Navigation (ECCV'22). ☆41 · Updated 2 years ago
- ☆49 · Updated 3 years ago
- [ICCV'23] Learning Vision-and-Language Navigation from YouTube Videos ☆55 · Updated 4 months ago
- [CVPR 2023] CoWs on Pasture: Baselines and Benchmarks for Language-Driven Zero-Shot Object Navigation ☆128 · Updated last year
- Code repository for the Habitat Synthetic Scenes Dataset (HSSD) paper. ☆88 · Updated 11 months ago
- Code and data of the CVPR 2022 paper: Bridging the Gap Between Learning in Discrete and Continuous Environments for Vision-and-Language N… ☆118 · Updated last year
- ☆33 · Updated last year
- Utility functions for working with Ai2-THOR. Try to do one thing once. ☆45 · Updated 2 years ago
- Public release for "Explore until Confident: Efficient Exploration for Embodied Question Answering" ☆55 · Updated 10 months ago
- Teaching robots to respond to open-vocab queries with CLIP and NeRF-like neural fields ☆167 · Updated last year
- Environment Predictive Coding for Visual Navigation. ICLR 2022. ☆15 · Updated 2 years ago
- ☆40 · Updated last year
- Code for sim-to-real transfer of a pretrained Vision-and-Language Navigation (VLN) agent to a robot using ROS. ☆41 · Updated 4 years ago
- Implementation of the ICCV 2023 paper "DREAMWALKER: Mental Planning for Continuous Vision-Language Navigation" ☆19 · Updated last year
- PoliFormer: Scaling On-Policy RL with Transformers Results in Masterful Navigators ☆76 · Updated 5 months ago
- [ICCV 2023] PEANUT: Predicting and Navigating to Unseen Targets ☆49 · Updated last year
- Python implementation of the paper "Learning hierarchical relationships for object-goal navigation" ☆45 · Updated 2 years ago
- [CVPR 2023] We propose a framework for the challenging 3D-aware ObjectNav based on two straightforward sub-policies. The two sub-policies,… ☆71 · Updated 11 months ago
- Official codebase for EmbCLIP ☆125 · Updated last year
- 🔀 Visual Room Rearrangement ☆113 · Updated last year