google-deepmind / streetlearn
A C++/Python implementation of the StreetLearn environment, built on Street View imagery, together with a TensorFlow implementation of the goal-driven navigation agents from "Learning to Navigate in Cities Without a Map" (NeurIPS 2018)
☆313 · Updated 5 years ago
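StreetLearn frames navigation as discrete movement over a graph of Street View panoramas, with a goal-driven courier task driven by image observations and a goal descriptor. The sketch below shows what an agent-environment loop for such a task looks like in outline; the `StreetNavEnv` class and its `reset`/`step` methods are stand-ins invented for this example, not the actual StreetLearn Python API.

```python
# Minimal sketch of a goal-driven navigation loop. StreetNavEnv is a toy
# stand-in written for this example; it is NOT the StreetLearn API, which
# defines its own environment and agent classes in C++/Python.
import random

class StreetNavEnv:
    """Hypothetical panorama-graph environment: reach the goal pano."""

    ACTIONS = ("forward", "turn_left", "turn_right")

    def __init__(self, num_panos=10):
        self.num_panos = num_panos
        self.position = 0
        self.goal = num_panos - 1

    def reset(self):
        self.position = 0
        # A real environment would return an image plus a goal descriptor
        # (e.g. lat/lng); here the observation is just two pano ids.
        return {"pano": self.position, "goal": self.goal}

    def step(self, action):
        # Toy dynamics: only "forward" changes position; turns are ignored.
        if action == "forward":
            self.position = min(self.position + 1, self.num_panos - 1)
        done = self.position == self.goal
        reward = 1.0 if done else 0.0
        return {"pano": self.position, "goal": self.goal}, reward, done

def run_episode(env, max_steps=100):
    obs = env.reset()
    total = 0.0
    for _ in range(max_steps):
        action = random.choice(StreetNavEnv.ACTIONS)  # random-policy placeholder
        obs, reward, done = env.step(action)
        total += reward
        if done:
            break
    return total

if __name__ == "__main__":
    print("episode return:", run_episode(StreetNavEnv()))
```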
Alternatives and similar repositories for streetlearn
Users interested in streetlearn are also comparing it to the repositories listed below
- (ICLR 2019) Learning Exploration Policies for Navigation ☆105 · Updated 6 years ago
- [ICLR 2018] Tensorflow/Keras code for Semi-parametric Topological Memory for Navigation ☆105 · Updated 6 years ago
- MINOS: Multimodal Indoor Simulator ☆203 · Updated 2 years ago
- Code for the habitat challenge ☆344 · Updated 2 years ago
- Train an RL agent to localize actively (PyTorch) ☆211 · Updated 7 years ago
- A research framework for autonomous driving ☆201 · Updated 2 years ago
- Learning to Learn how to Learn: Self-Adaptive Visual Navigation using Meta-Learning (https://arxiv.org/abs/1812.00971) ☆193 · Updated 6 years ago
- Code for the "Benchmarking Classic and Learned Navigation in Complex 3D Environments" paper ☆65 · Updated 6 years ago
- ☆105 · Updated 7 years ago
- ☆57 · Updated 4 years ago
- Gated Path Planning Networks (ICML 2018) ☆180 · Updated 6 years ago
- Design multi-agent environments and simple reward functions such that social driving behavior emerges ☆134 · Updated 4 years ago
- Target-driven Visual Navigation in Indoor Scenes using Deep Reinforcement Learning ☆23 · Updated 8 years ago
- Code for the publication "Occupancy Anticipation for Efficient Exploration and Navigation" (ECCV 2020) ☆79 · Updated 2 years ago
- Repository for the paper "Planning to Explore via Self-Supervised World Models" ☆227 · Updated 2 years ago
- ☆177 · Updated 2 years ago
- Open-Source Distributed Reinforcement Learning Framework by Stanford Vision and Learning Lab ☆493 · Updated 5 years ago
- [Reimplementation of Ross et al., 2011] An implementation of DAGGER using ConvNets for driving from pixels ☆83 · Updated 7 years ago
- [NIPS 2017] InfoGAIL: Interpretable Imitation Learning from Visual Demonstrations ☆182 · Updated last year
- A simple example of imitation learning with Dataset Aggregation (DAGGER) on the TORCS environment (a minimal DAGGER loop is sketched after this list) ☆70 · Updated 8 years ago
- The multi-agent version of TORCS for developing control algorithms for fully autonomous driving in the cluttered, multi-agent settings of… ☆145 · Updated 6 years ago
- ☆32 · Updated 7 years ago
- Source code for our NIPS 2017 paper, InfoGAIL: Interpretable Imitation Learning from Visual Demonstrations ☆42 · Updated 8 years ago
- CLEVR-Robot: a reinforcement learning environment combining vision, language and control ☆138 · Updated last year
- Datasets used to train Generative Query Networks (GQNs) in the ‘Neural Scene Representation and Rendering’ paper ☆273 · Updated 3 years ago
- My reading list for model-based control ☆161 · Updated 6 years ago
- Code for the paper "OpenAI Remote Rendering Backend" ☆247 · Updated 2 years ago
- Mid-Level Visual Representations Improve Generalization and Sample Efficiency for Learning Visuomotor Policies ☆108 · Updated 2 years ago
- Train an RL agent to execute natural language instructions in a 3D Environment (PyTorch) ☆238 · Updated 7 years ago
- [ICML 2019] TensorFlow Code for Self-Supervised Exploration via Disagreement ☆129 · Updated 6 years ago
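Two of the repositories above are built around DAGGER (Dataset Aggregation, Ross et al., 2011). As a reminder of the structure they implement, here is a minimal, self-contained sketch of the DAGGER loop on a toy 1-D task; `expert_action`, `rollout`, and `train_policy` are illustrative stand-ins and do not come from those repositories.

```python
# Minimal DAGGER loop: roll out the current policy, label the visited states
# with the expert, aggregate the dataset, and retrain. All helpers below are
# toy stand-ins, not code from the listed repositories.
import random

def expert_action(state):
    # Hypothetical expert: always step toward the origin.
    return -1 if state > 0 else 1

def rollout(policy, steps=20):
    """Run the current policy from a fixed start and record visited states."""
    state, visited = 5, []
    for _ in range(steps):
        visited.append(state)
        state += policy(state)
    return visited

def train_policy(dataset):
    """'Train' by majority vote per state, standing in for a supervised learner."""
    votes = {}
    for s, a in dataset:
        votes.setdefault(s, []).append(a)
    table = {s: max(set(acts), key=acts.count) for s, acts in votes.items()}
    return lambda s: table.get(s, random.choice([-1, 1]))

dataset = []
policy = lambda s: random.choice([-1, 1])  # start from a random policy
for iteration in range(5):
    # 1. Roll out the CURRENT policy to see which states it actually visits.
    states = rollout(policy)
    # 2. Label those states with expert actions and aggregate the dataset.
    dataset += [(s, expert_action(s)) for s in states]
    # 3. Retrain the policy on the aggregated dataset.
    policy = train_policy(dataset)

print("states visited by final policy:", rollout(policy, steps=10))
```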