facebookresearch / EmbodiedQA
Train embodied agents that can answer questions in environments
☆301 · Updated last year
Alternatives and similar repositories for EmbodiedQA:
Users interested in EmbodiedQA are comparing it to the libraries listed below:
- Code release for Fried et al., "Speaker-Follower Models for Vision-and-Language Navigation", NeurIPS 2018 ☆131 · Updated 2 years ago
- Train an RL agent to execute natural language instructions in a 3D Environment (PyTorch) ☆236 · Updated 6 years ago
- Repository containing code for the paper "IQA: Visual Question Answering in Interactive Environments" ☆123 · Updated 4 years ago
- PyTorch code for ICLR 2019 paper: Self-Monitoring Navigation Agent via Auxiliary Progress Estimation ☆120 · Updated last year
- PyTorch code for Learning Cooperative Visual Dialog Agents using Deep Reinforcement Learning ☆169 · Updated 6 years ago
- Vision and Language Agent Navigation ☆73 · Updated 4 years ago
- Code release for Hu et al., "Learning to Reason: End-to-End Module Networks for Visual Question Answering", ICCV 2017 ☆270 · Updated 4 years ago
- Code for "Tactical Rewind: Self-Correction via Backtracking in Vision-and-Language Navigation" ☆61 · Updated 5 years ago
- Neural Module Network for VQA in PyTorch ☆107 · Updated 7 years ago
- Learning to Learn how to Learn: Self-Adaptive Visual Navigation using Meta-Learning (https://arxiv.org/abs/1812.00971) ☆184 · Updated 5 years ago
- Implementation for the paper "Compositional Attention Networks for Machine Reasoning" (Hudson and Manning, ICLR 2018) ☆497 · Updated 3 years ago
- PyTorch code of NAACL 2019 paper "Learning to Navigate Unseen Environments: Back Translation with Environmental Dropout" ☆125 · Updated 3 years ago
- PyTorch code for CVPR 2019 paper: The Regretful Agent: Heuristic-Aided Navigation through Progress Estimation ☆124 · Updated last year
- Cornell Touchdown natural language navigation and spatial reasoning dataset ☆99 · Updated 4 years ago
- Starter code in PyTorch for the Visual Dialog challenge ☆192 · Updated last year
- This repository provides code for reproducing experiments of the paper Talk The Walk: Navigating New York City Through Grounded Dialogue … ☆111 · Updated 3 years ago
- PyTorch code for the ACL 2020 paper: "BabyWalk: Going Farther in Vision-and-Language Navigation by Taking Baby Steps" ☆40 · Updated 2 years ago
- MAttNet: Modular Attention Network for Referring Expression Comprehension ☆293 · Updated 2 years ago
- [CVPR 2017] AMT chat interface code used to collect the Visual Dialog dataset ☆79 · Updated 2 years ago
- CoDraw dataset ☆93 · Updated 6 years ago
- [ICLR 2018] TensorFlow code for zero-shot visual imitation by self-supervised exploration ☆203 · Updated 6 years ago
- Code for the Habitat challenge ☆315 · Updated last year
- [ICLR 2018] TensorFlow/Keras code for Semi-parametric Topological Memory for Navigation ☆104 · Updated 5 years ago
- Recognition to Cognition Networks (code for the model in "From Recognition to Cognition: Visual Commonsense Reasoning", CVPR 2019) ☆465 · Updated 3 years ago
- PyTorch implementation of the winner of the VQA Challenge Workshop at CVPR'17 ☆163 · Updated 5 years ago
- Room-across-Room (RxR) is a large-scale, multilingual dataset for Vision-and-Language Navigation (VLN) in Matterport3D environments. It c… ☆124 · Updated last year
- Visual Question Answering project with state-of-the-art single-model performance ☆131 · Updated 6 years ago
- We release the dataset collected for our research, code that implements the neural network models described in the paper, and scripts to reproduce… ☆161 · Updated 3 years ago
- ICML 2018 Self-Imitation Learning ☆274 · Updated 4 years ago
- Neural-symbolic visual question answering ☆262 · Updated last year