facebookresearch / EmbodiedQA
Train embodied agents that can answer questions in environments
☆315 · Updated 2 years ago
Alternatives and similar repositories for EmbodiedQA
Users interested in EmbodiedQA are comparing it to the libraries listed below.
- Repository containing code for the paper "IQA: Visual Question Answering in Interactive Environments" ☆126 · Updated 5 years ago
- Train an RL agent to execute natural language instructions in a 3D Environment (PyTorch) ☆238 · Updated 7 years ago
- PyTorch code for ICLR 2019 paper: Self-Monitoring Navigation Agent via Auxiliary Progress Estimation ☆122 · Updated 2 years ago
- Code release for Fried et al., "Speaker-Follower Models for Vision-and-Language Navigation" (NeurIPS 2018) ☆139 · Updated 3 years ago
- PyTorch code for Learning Cooperative Visual Dialog Agents using Deep Reinforcement Learning ☆169 · Updated 7 years ago
- Code release for Hu et al., "Learning to Reason: End-to-End Module Networks for Visual Question Answering" (ICCV 2017) ☆272 · Updated 5 years ago
- Implementation for the paper "Compositional Attention Networks for Machine Reasoning" (Hudson and Manning, ICLR 2018) ☆511 · Updated 4 years ago
- Cornell Touchdown natural language navigation and spatial reasoning dataset ☆105 · Updated 5 years ago
- Code for "Tactical Rewind: Self-Correction via Backtracking in Vision-and-Language Navigation" ☆62 · Updated 6 years ago
- This repository provides code for reproducing experiments of the paper Talk The Walk: Navigating New York City Through Grounded Dialogue … ☆110 · Updated 4 years ago
- Vision and Language Agent Navigation ☆82 · Updated 4 years ago
- PyTorch code of NAACL 2019 paper "Learning to Navigate Unseen Environments: Back Translation with Environmental Dropout" ☆144 · Updated 4 years ago
- Neural-symbolic visual question answering ☆279 · Updated 2 years ago
- A Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning ☆638 · Updated 4 years ago
- Neural Module Network for VQA in Pytorch ☆107 · Updated 8 years ago
- Starter code in PyTorch for the Visual Dialog challenge ☆189 · Updated 2 years ago
- Learning to Learn how to Learn: Self-Adaptive Visual Navigation using Meta-Learning (https://arxiv.org/abs/1812.00971) ☆194 · Updated 2 weeks ago
- PyTorch code for CVPR 2019 paper: The Regretful Agent: Heuristic-Aided Navigation through Progress Estimation ☆125 · Updated 2 years ago
- [CVPR 2017] AMT chat interface code used to collect the Visual Dialog dataset ☆78 · Updated 3 years ago
- PyTorch code for the ACL 2020 paper: "BabyWalk: Going Farther in Vision-and-Language Navigation by Taking Baby Steps" ☆42 · Updated 3 years ago
- Code for the Habitat challenge ☆344 · Updated 2 years ago
- PyTorch implementation of "Transparency by Design: Closing the Gap Between Performance and Interpretability in Visual Reasoning" ☆348 · Updated 4 years ago
- Visual dialog model in PyTorch ☆109 · Updated 7 years ago
- [ICLR 2018] TensorFlow code for zero-shot visual imitation by self-supervised exploration ☆203 · Updated 7 years ago
- MAttNet: Modular Attention Network for Referring Expression Comprehension ☆297 · Updated 3 years ago
- Cornell Instruction Following Framework ☆34 · Updated 4 years ago
- Code for ICML 2019 paper "Probabilistic Neural-symbolic Models for Interpretable Visual Question Answering" [long-oral] ☆67 · Updated 2 years ago
- Recognition to Cognition Networks (code for the model in "From Recognition to Cognition: Visual Commonsense Reasoning", CVPR 2019) ☆470 · Updated 4 years ago
- CoDraw dataset ☆93 · Updated 6 years ago
- Attention-based Visual Question Answering in Torch ☆101 · Updated 8 years ago