VegB / VLN-Transformer
Implementation of "Multimodal Text Style Transfer for Outdoor Vision-and-Language Navigation"
☆26 · Updated 4 years ago
Alternatives and similar repositories for VLN-Transformer
Users interested in VLN-Transformer are comparing it to the repositories listed below.
- Cooperative Vision-and-Dialog Navigation ☆71 · Updated 2 years ago
- Feature resources of "Diagnosing the Environment Bias in Vision-and-Language Navigation" ☆16 · Updated 5 years ago
- Code and data for the CVPR 2021 paper "Structured Scene Memory for Vision-Language Navigation" ☆41 · Updated 4 years ago
- Code for the paper "Improving Vision-and-Language Navigation with Image-Text Pairs from the Web" (ECCV 2020) ☆57 · Updated 3 years ago
- REVERIE: Remote Embodied Visual Referring Expression in Real Indoor Environments ☆138 · Updated 2 years ago
- Large-scale pretraining for the navigation task ☆92 · Updated 2 years ago
- Code for the NeurIPS 2021 paper "Language and Visual Entity Relationship Graph for Agent Navigation" ☆46 · Updated 3 years ago
- Repository of the ECCV 2020 paper "Active Visual Information Gathering for Vision-Language Navigation" ☆43 · Updated 3 years ago
- PyTorch code for the NAACL 2019 paper "Learning to Navigate Unseen Environments: Back Translation with Environmental Dropout" ☆136 · Updated 4 years ago
- Code for the CVPR 2021 oral paper "A Recurrent Vision-and-Language BERT for Navigation" ☆192 · Updated 3 years ago
- Code and models of MOCA (Modular Object-Centric Approach), proposed in "Factorizing Perception and Policy for Interactive Instruction Foll…" ☆38 · Updated last year
- Code for the CVPR 2022 paper "HOP: History-and-Order Aware Pre-training for Vision-and-Language Navigation" ☆30 · Updated 2 years ago
- Code and data for the Fine-Grained R2R dataset, proposed in the EMNLP 2021 paper "Sub-Instruction Aware Vision-and-Language Navigation" ☆51 · Updated 4 years ago
- PyTorch code and data for "EnvEdit: Environment Editing for Vision-and-Language Navigation" (CVPR 2022) ☆30 · Updated 3 years ago
- Episodic Transformer (E.T.), a novel attention-based architecture for vision-and-language navigation; E.T. is based on a multimodal tra… ☆91 · Updated 2 years ago
- PyTorch code for the ICRA 2021 paper "Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation" ☆84 · Updated last year
- PyTorch code for the ICLR 2019 paper "Self-Monitoring Navigation Agent via Auxiliary Progress Estimation" ☆122 · Updated 2 years ago
- PyTorch code for the ACL 2020 paper "BabyWalk: Going Farther in Vision-and-Language Navigation by Taking Baby Steps" ☆42 · Updated 3 years ago
- Code for "Tactical Rewind: Self-Correction via Backtracking in Vision-and-Language Navigation" ☆61 · Updated 6 years ago
- Know What and Know Where: An Object-and-Room Informed Sequential BERT for Indoor Vision-Language Navigation ☆16 · Updated 3 years ago
- Official PyTorch implementation of the ECCV 2022 paper "FedVLN: Privacy-preserving Federated Vision-and-Language Navigation" ☆13 · Updated 3 years ago
- Official implementation of "History Aware Multimodal Transformer for Vision-and-Language Navigation" (NeurIPS 2021) ☆132 · Updated 2 years ago
- Vision and Language Agent Navigation ☆82 · Updated 4 years ago
- ☆20 · Updated 3 years ago
- Official implementation of "Layout-aware Dreamer for Embodied Referring Expression Grounding" (AAAI 2023) ☆16 · Updated 2 years ago
- ☆44 · Updated 3 years ago
- A curated list of research papers in Vision-Language Navigation (VLN) ☆226 · Updated last year
- Official repository of the NeurIPS 2021 paper "PTR" ☆32 · Updated 3 years ago
- Code for the NeurIPS 2021 paper "Curriculum Learning for Vision-and-Language Navigation" ☆14 · Updated 2 years ago
- Repository of the accepted NeurIPS 2022 paper "Towards Versatile Embodied Navigation" ☆21 · Updated 2 years ago