expectorlin / DR-Attacker
Code for the paper "Adversarial Reinforced Instruction Attacker for Robust Vision-Language Navigation" (TPAMI 2021)
☆11 · Updated 3 years ago
Alternatives and similar repositories for DR-Attacker
Users that are interested in DR-Attacker are comparing it to the libraries listed below
- Code of the NeurIPS 2021 paper: Language and Visual Entity Relationship Graph for Agent Navigation ☆45 · Updated 3 years ago
- ☆13 · Updated 3 years ago
- Repository of our accepted NeurIPS-2022 paper "Towards Versatile Embodied Navigation" ☆21 · Updated 2 years ago
- ☆12 · Updated 2 years ago
- Code for the paper "Improving Vision-and-Language Navigation with Image-Text Pairs from the Web" (ECCV 2020) ☆56 · Updated 2 years ago
- Code and data of the Fine-Grained R2R Dataset proposed in the EMNLP 2021 paper Sub-Instruction Aware Vision-and-Language Navigation ☆48 · Updated 3 years ago
- Codebase for the Airbert paper ☆45 · Updated 2 years ago
- [ACM MM 2022] Target-Driven Structured Transformer Planner for Vision-Language Navigation ☆15 · Updated 2 years ago
- Code for CVPR22 paper One Step at a Time: Long-Horizon Vision-and-Language Navigation with Milestones ☆13 · Updated 2 years ago
- Episodic Transformer (E.T.) is a novel attention-based architecture for vision-and-language navigation. E.T. is based on a multimodal transformer… ☆90 · Updated 2 years ago
- Code of the CVPR 2022 paper "HOP: History-and-Order Aware Pre-training for Vision-and-Language Navigation" ☆30 · Updated last year
- Code and Data for our CVPR 2021 paper "Structured Scene Memory for Vision-Language Navigation" ☆39 · Updated 3 years ago
- Official Implementation of IVLN-CE: Iterative Vision-and-Language Navigation in Continuous Environments ☆34 · Updated last year
- Visual Navigation with Spatial Attention ☆38 · Updated 6 months ago
- Official implementation of History Aware Multimodal Transformer for Vision-and-Language Navigation (NeurIPS'21) ☆123 · Updated 2 years ago
- Official implementation of KERM: Knowledge Enhanced Reasoning for Vision-and-Language Navigation (CVPR'23) ☆43 · Updated 11 months ago
- Pytorch code for ICRA'21 paper: "Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation" ☆79 · Updated last year
- Code for NeurIPS 2021 paper "Curriculum Learning for Vision-and-Language Navigation" ☆15 · Updated 2 years ago
- Know What and Know Where: An Object-and-Room Informed Sequential BERT for Indoor Vision-Language Navigation ☆17 · Updated 3 years ago
- ☆17 · Updated last year
- Code for the paper "ADAPT: Vision-Language Navigation with Modality-Aligned Action Prompts" (CVPR 2022) ☆11 · Updated 3 years ago
- Pytorch Code and Data for EnvEdit: Environment Editing for Vision-and-Language Navigation (CVPR 2022) ☆32 · Updated 2 years ago
- REVERIE: Remote Embodied Visual Referring Expression in Real Indoor Environments ☆129 · Updated last year
- PyTorch code for the paper: "Perceive, Transform, and Act: Multi-Modal Attention Networks for Vision-and-Language Navigation" ☆19 · Updated 3 years ago
- Implementation (R2R part) for the paper "Iterative Vision-and-Language Navigation" ☆17 · Updated last year
- ☆18 · Updated 2 years ago
- Official Pytorch implementation for NeurIPS 2022 paper "Weakly-Supervised Multi-Granularity Map Learning for Vision-and-Language Navigation" ☆33 · Updated 2 years ago
- ☆36 · Updated 4 years ago
- Hierarchical Object-to-Zone Graph for Object Navigation (ICCV 2021) ☆46 · Updated 2 years ago
- PyTorch Code of NAACL 2019 paper "Learning to Navigate Unseen Environments: Back Translation with Environmental Dropout" ☆132 · Updated 3 years ago