facebookresearch / habitat-challenge
Code for the habitat challenge
☆332 · Updated 2 years ago
Alternatives and similar repositories for habitat-challenge
Users interested in habitat-challenge are comparing it to the repositories listed below.
- An open source framework for research in Embodied-AI from AI2. ☆354 · Updated 5 months ago
- RoboTHOR Challenge ☆91 · Updated 4 years ago
- PyTorch code for NeurIPS-20 paper "Object Goal Navigation using Goal-Oriented Semantic Exploration" ☆393 · Updated last year
- This repository contains code for our publication "Occupancy Anticipation for Efficient Exploration and Navigation" in ECCV 2020. ☆80 · Updated 2 years ago
- Vision-and-Language Navigation in Continuous Environments using Habitat ☆465 · Updated 5 months ago
- ☆80 · Updated 3 years ago
- Gibson Environments: Real-World Perception for Embodied Agents ☆911 · Updated last year
- A Simulation Environment to train Robots in Large Realistic Interactive Scenes ☆748 · Updated last year
- Room-across-Room (RxR) is a large-scale, multilingual dataset for Vision-and-Language Navigation (VLN) in Matterport3D environments. It c… ☆155 · Updated last year
- Reading list for research topics in embodied vision ☆630 · Updated 2 weeks ago
- Train embodied agents that can answer questions in environments ☆307 · Updated last year
- Train robotic agents to learn pick and place with deep learning for vision-based manipulation in PyBullet. Transporter Nets, CoRL 2020. ☆597 · Updated 10 months ago
- ☆174 · Updated 2 years ago
- ☆461 · Updated 2 years ago
- 🔀 Visual Room Rearrangement ☆117 · Updated last year
- Learning to Learn how to Learn: Self-Adaptive Visual Navigation using Meta-Learning (https://arxiv.org/abs/1812.00971) ☆188 · Updated 5 years ago
- ☆56 · Updated 4 years ago
- AI Research Platform for Reinforcement Learning from Real Panoramic Images. ☆601 · Updated 11 months ago
- Official codebase for EmbCLIP ☆126 · Updated 2 years ago
- (ICLR 2019) Learning Exploration Policies for Navigation ☆106 · Updated 6 years ago
- Code for training embodied agents using imitation learning at scale in Habitat-Lab ☆42 · Updated 2 months ago
- This repository contains code to reproduce experimental results from our HM3D paper in NeurIPS 2021. ☆167 · Updated 3 years ago
- ☆234 · Updated 5 months ago
- A curated list of research papers in Vision-Language Navigation (VLN) ☆218 · Updated last year
- Suite of human-collected datasets and a multi-task continuous control benchmark for open vocabulary visuolinguomotor learning. ☆317 · Updated last week
- Masked Visual Pre-training for Robotics ☆234 · Updated 2 years ago
- Teaching robots to respond to open-vocab queries with CLIP and NeRF-like neural fields ☆172 · Updated last year
- The data skeleton from "3D Scene Graph: A Structure for Unified Semantics, 3D Space, and Camera" http://3dscenegraph.stanford.edu ☆278 · Updated 11 months ago
- Episodic Transformer (E.T.) is a novel attention-based architecture for vision-and-language navigation. E.T. is based on a multimodal tra… ☆90 · Updated last year
- Code of the CVPR 2021 Oral paper: A Recurrent Vision-and-Language BERT for Navigation ☆182 · Updated 2 years ago