facebookresearch / habitat-challenge
Code for the habitat challenge
☆344 · Updated 2 years ago
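A challenge submission wraps a policy in an agent that exposes `reset()` and `act(observations)` and hands it to the evaluator. Below is a minimal sketch of such an agent, assuming habitat-lab's `habitat.Agent` and `habitat.Challenge` interfaces as used in the challenge starter code; the discrete action names and the config handling noted in the comments are illustrative and may differ between challenge tracks and years.

```python
# Minimal sketch of a Habitat Challenge submission agent.
# Assumes habitat-lab's habitat.Agent / habitat.Challenge interfaces; the
# action space shown is the PointNav-style discrete one and is illustrative.
import random

import habitat


class RandomWalkAgent(habitat.Agent):
    """Trivial baseline: pick a random discrete action every step."""

    def reset(self):
        # Called at the start of every episode; this baseline keeps no state.
        pass

    def act(self, observations):
        # `observations` is a dict of sensor readings (e.g. "rgb", "depth").
        # A real agent would run its policy on them; this sketch acts randomly.
        return {"action": random.choice(["MOVE_FORWARD", "TURN_LEFT", "TURN_RIGHT"])}


def main():
    agent = RandomWalkAgent()
    # Inside the evaluation container, habitat.Challenge picks up the task
    # configuration from the environment and runs the benchmark episodes.
    challenge = habitat.Challenge()
    challenge.submit(agent)


if __name__ == "__main__":
    main()
```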
Alternatives and similar repositories for habitat-challenge
Users interested in habitat-challenge are comparing it to the libraries listed below.
- This repository contains code for our publication "Occupancy Anticipation for Efficient Exploration and Navigation" in ECCV 2020. ☆79 · Updated 2 years ago
- RoboTHOR Challenge ☆96 · Updated 4 years ago
- An open source framework for research in Embodied-AI from AI2. ☆373 · Updated 4 months ago
- ☆82 · Updated 3 years ago
- Room-across-Room (RxR) is a large-scale, multilingual dataset for Vision-and-Language Navigation (VLN) in Matterport3D environments. It c… ☆167 · Updated 2 years ago
- A Simulation Environment to train Robots in Large Realistic Interactive Scenes ☆784 · Updated last year
- Pytorch code for NeurIPS-20 Paper "Object Goal Navigation using Goal-Oriented Semantic Exploration" ☆429 · Updated 2 years ago
- Learning to Learn how to Learn: Self-Adaptive Visual Navigation using Meta-Learning (https://arxiv.org/abs/1812.00971) ☆194 · Updated 2 weeks ago
- AI Research Platform for Reinforcement Learning from Real Panoramic Images. ☆656 · Updated last year
- This repository contains code for our paper "An Exploration of Embodied Visual Exploration". ☆65 · Updated 4 years ago
- Resources for Auxiliary Tasks and Exploration Enable ObjectNav ☆40 · Updated 4 years ago
- (ICLR 2019) Learning Exploration Policies for Navigation ☆106 · Updated 6 years ago
- Code for training embodied agents using imitation learning at scale in Habitat-Lab ☆43 · Updated 8 months ago
- A curated list of research papers in Vision-Language Navigation (VLN) ☆231 · Updated last year
- 🔀 Visual Room Rearrangement ☆124 · Updated 2 years ago
- Habitat-Web is a web application to collect human demonstrations for embodied tasks on Amazon Mechanical Turk (AMT) using the Habitat sim… ☆59 · Updated 3 years ago
- Reading list for research topics in embodied vision ☆688 · Updated 6 months ago
- Gibson Environments: Real-World Perception for Embodied Agents ☆931 · Updated last year
- ☆44 · Updated 3 years ago
- ☆177 · Updated 2 years ago
- Code for reproducing the results of NeurIPS 2020 paper "MultiON: Benchmarking Semantic Map Memory using Multi-Object Navigation" ☆55 · Updated 5 years ago
- Vision-and-Language Navigation in Continuous Environments using Habitat ☆659 · Updated 11 months ago
- ☆57 · Updated 4 years ago
- The data skeleton from "3D Scene Graph: A Structure for Unified Semantics, 3D Space, and Camera" http://3dscenegraph.stanford.edu ☆302 · Updated last year
- ☆575 · Updated 2 years ago
- Pytorch code for ICRA'21 paper: "Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation" ☆87 · Updated last year
- Train robotic agents to learn pick and place with deep learning for vision-based manipulation in PyBullet. Transporter Nets, CoRL 2020. ☆618 · Updated last year
- Train embodied agents that can answer questions in environments ☆313 · Updated 2 years ago
- Masked Visual Pre-training for Robotics ☆243 · Updated 2 years ago
- Episodic Transformer (E.T.) is a novel attention-based architecture for vision-and-language navigation. E.T. is based on a multimodal tra… ☆93 · Updated 2 years ago