Sha-Lab / babywalk
PyTorch code for the ACL 2020 paper: "BabyWalk: Going Farther in Vision-and-Language Navigation by Taking Baby Steps"
☆41 · Updated 2 years ago
Alternatives and similar repositories for babywalk:
Users interested in babywalk are comparing it to the repositories listed below.
- PyTorch code for the ICLR 2019 paper: Self-Monitoring Navigation Agent via Auxiliary Progress Estimation ☆120 · Updated last year
- Code for "Tactical Rewind: Self-Correction via Backtracking in Vision-and-Language Navigation" ☆61 · Updated 5 years ago
- Cooperative Vision-and-Dialog Navigation ☆68 · Updated 2 years ago
- Code for the paper "Improving Vision-and-Language Navigation with Image-Text Pairs from the Web" (ECCV 2020) ☆53 · Updated 2 years ago
- PyTorch code for the NAACL 2019 paper "Learning to Navigate Unseen Environments: Back Translation with Environmental Dropout" ☆126 · Updated 3 years ago
- Feature resources for "Diagnosing the Environment Bias in Vision-and-Language Navigation" ☆17 · Updated 4 years ago
- Code and data for the Fine-Grained R2R Dataset proposed in the EMNLP 2021 paper Sub-Instruction Aware Vision-and-Language Navigation ☆44 · Updated 3 years ago
- Vision and Language Agent Navigation ☆75 · Updated 4 years ago
- Code release for Fried et al., "Speaker-Follower Models for Vision-and-Language Navigation", NeurIPS 2018 ☆132 · Updated 2 years ago
- Episodic Transformer (E.T.) is a novel attention-based architecture for vision-and-language navigation. E.T. is based on a multimodal tra… ☆90 · Updated last year
- Repository for the ECCV 2020 paper `Active Visual Information Gathering for Vision-Language Navigation` ☆44 · Updated 2 years ago
- Code and models for MOCA (Modular Object-Centric Approach) proposed in "Factorizing Perception and Policy for Interactive Instruction Foll… ☆37 · Updated 8 months ago
- Cornell Instruction Following Framework ☆34 · Updated 3 years ago
- Dataset for Bilingual VLN ☆11 · Updated 4 years ago
- Repository containing code for the paper "IQA: Visual Question Answering in Interactive Environments" ☆123 · Updated 5 years ago
- Implementation of "Multimodal Text Style Transfer for Outdoor Vision-and-Language Navigation" ☆25 · Updated 4 years ago
- Large-scale pretraining for navigation tasks ☆89 · Updated 2 years ago
- PyTorch code for the CVPR 2019 paper: The Regretful Agent: Heuristic-Aided Navigation through Progress Estimation ☆124 · Updated last year
- Code for the NeurIPS 2021 paper: Language and Visual Entity Relationship Graph for Agent Navigation ☆45 · Updated 3 years ago
- REVERIE: Remote Embodied Visual Referring Expression in Real Indoor Environments ☆121 · Updated last year
- Code and data for the CVPR 2021 paper "Structured Scene Memory for Vision-Language Navigation" ☆38 · Updated 3 years ago
- Repository to generate CLEVR-Dialog: a diagnostic dataset for Visual Dialog ☆46 · Updated 5 years ago
- PyTorch code for the paper "Perceive, Transform, and Act: Multi-Modal Attention Networks for Vision-and-Language Navigation" ☆19 · Updated 3 years ago
- Code for the paper "Vision-Language Navigation with Multi-granularity Observation and Auxiliary Reasoning Tasks" ☆23 · Updated 3 years ago
- Code for "Chasing Ghosts: Instruction Following as Bayesian State Tracking", published at NeurIPS 2019 ☆10 · Updated 5 years ago
- Code for the CVPR 2021 Oral paper: A Recurrent Vision-and-Language BERT for Navigation ☆165 · Updated 2 years ago
- Cornell Touchdown natural language navigation and spatial reasoning dataset ☆100 · Updated 4 years ago
- PyTorch code for the BMVC 2019 paper: Embodied Vision-and-Language Navigation with Dynamic Convolutional Filters ☆20 · Updated 2 years ago
- Code for the paper "Adversarial Reinforced Instruction Attacker for Robust Vision-Language Navigation" (TPAMI 2021) ☆11 · Updated 2 years ago
- Code for the CVPR 2019 paper "Recursive Visual Attention in Visual Dialog" ☆64 · Updated last year