lil-lab / drif
Dynamic Robot Instruction Following
☆35 · Updated 3 years ago
Alternatives and similar repositories for drif:
Users interested in drif are comparing it to the libraries listed below.
- Cornell Instruction Following Framework ☆34 · Updated 3 years ago
- PyTorch code for ICLR 2019 paper: Self-Monitoring Navigation Agent via Auxiliary Progress Estimation ☆122 · Updated last year
- CLEVR-Robot: a reinforcement learning environment combining vision, language and control. ☆133 · Updated 9 months ago
- Official code for the paper "Learning Transition Policies for Composing Complex Skills" (ICLR 2019) ☆73 · Updated 6 years ago
- Train an RL agent to execute natural language instructions in a 3D Environment (PyTorch) ☆236 · Updated 7 years ago
- Vision and Language Agent Navigation ☆76 · Updated 4 years ago
- Modular multitask reinforcement learning with policy sketches ☆108 · Updated 3 years ago
- Code Repository for Regression Planning Networks ☆60 · Updated 9 months ago
- Entity Abstraction in Visual Model-Based Reinforcement Learning ☆56 · Updated 4 years ago
- Cornell House Agent Learning Environment ☆47 · Updated 2 years ago
- Repository containing code for the paper "IQA: Visual Question Answering in Interactive Environments" ☆125 · Updated 5 years ago
- [ICLR 2018] Tensorflow/Keras code for Semi-parametric Topological Memory for Navigation ☆104 · Updated 6 years ago
- Visual MPC implementation running on Rethink Sawyer Robot ☆61 · Updated 5 years ago
- Code for "Auxiliary Tasks Speed Up Learning PointGoal Navigation" ☆18 · Updated 4 years ago
- Code release for Fried et al., "Speaker-Follower Models for Vision-and-Language Navigation", NeurIPS 2018 ☆133 · Updated 2 years ago
- Baselines and memory-based scenarios for the ViZDoom simulator ☆34 · Updated 2 years ago
- Code for "Tactical Rewind: Self-Correction via Backtracking in Vision-and-Language Navigation" ☆61 · Updated 5 years ago
- Tensorflow models and simulation code for 'ShapeStacks: Learning Vision-Based Physical Intuition for Generalised Object Stacking' ☆46 · Updated 2 years ago
- ☆33 · Updated 6 years ago
- Source code for our NIPS 2017 paper, InfoGAIL: Interpretable Imitation Learning from Visual Demonstrations ☆42 · Updated 7 years ago
- ☆35 · Updated 5 years ago
- Visual Foresight: Model-Based Deep Reinforcement Learning for Vision-Based Robotic Control ☆139 · Updated 2 years ago
- Grounded SCAN data set. ☆69 · Updated 3 years ago
- Cooperative Vision-and-Dialog Navigation ☆71 · Updated 2 years ago
- Cornell Touchdown natural language navigation and spatial reasoning dataset. ☆99 · Updated 4 years ago
- [ICRA 2019] Propagation Networks for Model-based Control Under Partial Observation ☆47 · Updated 6 years ago
- BabyAI++: Towards Grounded Language Learning beyond Memorization (ICLR BeTR-RL 2020) ☆26 · Updated 4 years ago
- Customisable Unified Physical Simulations (CUPS) for Reinforcement Learning. Experiments run on the ai2thor environment (http://ai2thor.a… ☆48 · Updated 5 years ago
- Reward Learning by Simulating the Past ☆44 · Updated 6 years ago
- RoboVat: A unified toolkit for simulated and real-world robotic task environments. ☆67 · Updated 2 years ago