aimagelab / LoCoNav
☆13 · Updated 3 years ago
Alternatives and similar repositories for LoCoNav
Users interested in LoCoNav are comparing it to the repositories listed below
- ☆40 · Updated 3 years ago
- public video dqn code ☆28 · Updated 2 years ago
- Python implementation of the paper Learning hierarchical relationships for object-goal navigation ☆48 · Updated 3 years ago
- Code for reproducing the results of NeurIPS 2020 paper "MultiON: Benchmarking Semantic Map Memory using Multi-Object Navigation" ☆56 · Updated 5 years ago
- [ICCV'21] Curious Representation Learning for Embodied Intelligence ☆31 · Updated 4 years ago
- Code for training embodied agents using imitation learning at scale in Habitat-Lab ☆42 · Updated 9 months ago
- This repository contains code for our publication "Occupancy Anticipation for Efficient Exploration and Navigation" in ECCV 2020. ☆80 · Updated 2 years ago
- ☆37 · Updated 4 years ago
- ☆11 · Updated 6 years ago
- Pytorch code for ICRA'21 paper: "Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation" ☆88 · Updated last year
- Resources for Auxiliary Tasks and Exploration Enable ObjectNav ☆41 · Updated 4 years ago
- Code and data of the Fine-Grained R2R Dataset proposed in the EMNLP 2021 paper Sub-Instruction Aware Vision-and-Language Navigation ☆56 · Updated 4 years ago
- Pushing it out of the Way: Interactive Visual Navigation ☆44 · Updated 2 years ago
- Navigation agent with Bayesian relational memory in the House3D environment ☆30 · Updated 6 years ago
- RoboTHOR Challenge ☆97 · Updated 4 years ago
- Learning to Learn how to Learn: Self-Adaptive Visual Navigation using Meta-Learning (https://arxiv.org/abs/1812.00971) ☆194 · Updated 3 weeks ago
- This repository contains code for our work "An Exploration of Embodied Visual Exploration". ☆66 · Updated 4 years ago
- ☆84 · Updated 3 years ago
- PONI: Potential Functions for ObjectGoal Navigation with Interaction-free Learning. CVPR 2022 (Oral). ☆114 · Updated 3 years ago
- 3D household task-based dataset created using customised AI2-THOR. ☆14 · Updated 3 years ago
- Code for the paper Watch-And-Help: A Challenge for Social Perception and Human-AI Collaboration ☆105 · Updated 3 years ago
- Habitat-Web is a web application to collect human demonstrations for embodied tasks on Amazon Mechanical Turk (AMT) using the Habitat sim… ☆59 · Updated 3 years ago
- Official code for the ACL 2021 Findings paper "Yichi Zhang and Joyce Chai. Hierarchical Task Learning from Language Instructions with Uni… ☆24 · Updated 4 years ago
- ManipulaTHOR, a framework that facilitates visual manipulation of objects using a robotic arm ☆96 · Updated 2 years ago
- Episodic Transformer (E.T.) is a novel attention-based architecture for vision-and-language navigation. E.T. is based on a multimodal tra… ☆93 · Updated 2 years ago
- Utility functions when working with Ai2-THOR. Try to do one thing once. ☆55 · Updated 3 years ago
- The ProcTHOR-10K Houses Dataset ☆117 · Updated 3 years ago
- Official GitHub Repository for paper "Visual Graph Memory with Unsupervised Representation for Visual Navigation", ICCV 2021 ☆66 · Updated 2 months ago
- Code for "Learning Affordance Landscapes for Interaction Exploration in 3D Environments" (NeurIPS 20) ☆38 · Updated 2 years ago
- 🔀 Visual Room Rearrangement ☆125 · Updated 2 years ago