facebookresearch / habitat-challenge
Code for the habitat challenge
☆315 · Updated last year
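The challenge repository itself mostly defines the agent and evaluation interface that submissions plug into. As a rough sketch only (assuming habitat-lab's `Agent`/`Challenge` API; action names and the `act()` return format vary by challenge year and task), a minimal submission looks something like this:

```python
# Minimal sketch of a challenge submission agent.
# Assumption: habitat-lab's Agent/Challenge interface; the exact action format
# differs between challenge years, so treat this as illustrative, not official.
import habitat


class ForwardOnlyAgent(habitat.Agent):
    def reset(self):
        # Called once at the start of each episode.
        pass

    def act(self, observations):
        # A real submission would run a learned policy over the RGB-D/GPS
        # observations; this toy agent always moves forward.
        return {"action": "MOVE_FORWARD"}


def main():
    agent = ForwardOnlyAgent()
    challenge = habitat.Challenge()  # remote evaluation is configured separately
    challenge.submit(agent)


if __name__ == "__main__":
    main()
```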
Alternatives and similar repositories for habitat-challenge:
Users who are interested in habitat-challenge are comparing it to the libraries listed below.
- An open source framework for research in Embodied-AI from AI2. ☆326 · Updated last month
- A Simulation Environment to train Robots in Large Realistic Interactive Scenes ☆686 · Updated 7 months ago
- Vision-and-Language Navigation in Continuous Environments using Habitat ☆344 · Updated last month
- This repository contains code for our publication "Occupancy Anticipation for Efficient Exploration and Navigation" in ECCV 2020. ☆79 · Updated last year
- RoboTHOR Challenge ☆85 · Updated 3 years ago
- Pytorch code for NeurIPS-20 Paper "Object Goal Navigation using Goal-Oriented Semantic Exploration" ☆344 · Updated last year
- ☆170 · Updated last year
- A curated list of research papers in Vision-Language Navigation (VLN) ☆192 · Updated 9 months ago
- ☆79 · Updated 2 years ago
- Learning to Learn how to Learn: Self-Adaptive Visual Navigation using Meta-Learning (https://arxiv.org/abs/1812.00971) ☆184 · Updated 5 years ago
- Reading list for research topics in embodied vision ☆565 · Updated this week
- Room-across-Room (RxR) is a large-scale, multilingual dataset for Vision-and-Language Navigation (VLN) in Matterport3D environments. It c… ☆128 · Updated last year
- Gibson Environments: Real-World Perception for Embodied Agents ☆884 · Updated 10 months ago
- (ICLR 2019) Learning Exploration Policies for Navigation ☆103 · Updated 5 years ago
- 🔀 Visual Room Rearrangement ☆106 · Updated last year
- ☆391 · Updated last year
- ☆56 · Updated 3 years ago
- AI Research Platform for Reinforcement Learning from Real Panoramic Images. ☆544 · Updated 7 months ago
- BenchBot is a tool for seamlessly testing & evaluating semantic scene understanding tools in both realistic 3D simulation & on real robot… ☆111 · Updated last year
- Code release for Fried et al., Speaker-Follower Models for Vision-and-Language Navigation, NeurIPS 2018. ☆132 · Updated 2 years ago
- Train robotic agents to learn pick and place with deep learning for vision-based manipulation in PyBullet. Transporter Nets, CoRL 2020. ☆590 · Updated 6 months ago
- ALFRED - A Benchmark for Interpreting Grounded Instructions for Everyday Tasks ☆395 · Updated 6 months ago
- MINOS: Multimodal Indoor Simulator ☆203 · Updated 2 years ago
- This repository contains code for our work "An Exploration of Embodied Visual Exploration". ☆65 · Updated 3 years ago
- Episodic Transformer (E.T.) is a novel attention-based architecture for vision-and-language navigation. E.T. is based on a multimodal tra… ☆90 · Updated last year
- Official codebase for EmbCLIP ☆117 · Updated last year
- Pytorch code for ICLR-20 Paper "Learning to Explore using Active Neural SLAM" ☆779 · Updated 7 months ago
- ☆217 · Updated last month
- Code for training embodied agents using imitation learning at scale in Habitat-Lab ☆36 · Updated 2 years ago
- Code of the CVPR 2021 Oral paper: A Recurrent Vision-and-Language BERT for Navigation ☆161 · Updated 2 years ago