PyTorch code for the paper: "Perceive, Transform, and Act: Multi-Modal Attention Networks for Vision-and-Language Navigation"
☆19 · Aug 5, 2021 · Updated 4 years ago
Alternatives and similar repositories for perceive-transform-and-act
Users interested in perceive-transform-and-act are comparing it to the repositories listed below.
- PyTorch code for BMVC 2019 paper: Embodied Vision-and-Language Navigation with Dynamic Convolutional Filters ☆20 · Jan 4, 2023 · Updated 3 years ago
- The repository of ECCV 2020 paper `Active Visual Information Gathering for Vision-Language Navigation` ☆44 · Apr 9, 2022 · Updated 3 years ago
- Code for "Tactical Rewind: Self-Correction via Backtracking in Vision-and-Language Navigation" ☆62 · Sep 24, 2019 · Updated 6 years ago
- Code of the NeurIPS 2021 paper: Language and Visual Entity Relationship Graph for Agent Navigation ☆46 · Oct 31, 2021 · Updated 4 years ago
- ☆13 · Dec 12, 2022 · Updated 3 years ago
- TopViewRS: Vision-Language Models as Top-View Spatial Reasoners (EMNLP 2024 Oral) ☆15 · Jun 14, 2025 · Updated 8 months ago
- Code for "Counterfactual Variable Control for Robust and Interpretable Question Answering" ☆14 · Oct 13, 2020 · Updated 5 years ago
- Large-scale pretraining for navigation tasks ☆94 · Mar 2, 2023 · Updated 3 years ago
- Self-Critical Sequence Training for Image Captioning ☆21 · May 27, 2017 · Updated 8 years ago
- PyTorch code for ICRA 2021 paper: "Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation" ☆88 · Jun 27, 2024 · Updated last year
- PyTorch code for ICLR 2019 paper: Self-Monitoring Navigation Agent via Auxiliary Progress Estimation ☆122 · Oct 3, 2023 · Updated 2 years ago
- REVERIE: Remote Embodied Visual Referring Expression in Real Indoor Environments ☆148 · Feb 26, 2026 · Updated last week
- Code and data of the Fine-Grained R2R Dataset proposed in the EMNLP 2021 paper Sub-Instruction Aware Vision-and-Language Navigation ☆56 · Oct 26, 2021 · Updated 4 years ago
- Code of the paper "Vision-Language Navigation with Multi-granularity Observation and Auxiliary Reasoning Tasks" ☆23 · Mar 23, 2021 · Updated 4 years ago
- Planning as In-Painting: A Diffusion-Based Embodied Task Planning Framework for Environments under Uncertainty ☆21 · Dec 11, 2023 · Updated 2 years ago
- A simple but well-performing "single-hop" visual attention model for the GQA dataset ☆20 · Aug 8, 2019 · Updated 6 years ago
- Rethinking Diversified and Discriminative Proposal Generation for Visual Grounding ☆23 · Jun 27, 2018 · Updated 7 years ago
- PyTorch implementation of "Vision-Dialog Navigation by Exploring Cross-modal Memory" (CVPR 2020) ☆19 · Nov 22, 2022 · Updated 3 years ago
- Inferring and Executing Programs for Visual Reasoning ☆21 · Jan 4, 2019 · Updated 7 years ago
- Code for the paper "Improving Vision-and-Language Navigation with Image-Text Pairs from the Web" (ECCV 2020) ☆59 · Oct 7, 2022 · Updated 3 years ago
- [ECCV 2024] Official implementation of C-Instructor: Controllable Navigation Instruction Generation with Chain of Thought Prompting ☆29 · Dec 16, 2024 · Updated last year
- AAAI 2020: The official implementation of "Learning Cross-modal Context Graph for Visual Grounding" ☆58 · Oct 25, 2021 · Updated 4 years ago
- Official PyTorch implementation for NeurIPS 2022 paper "Weakly-Supervised Multi-Granularity Map Learning for Vision-and-Language Navigation" ☆33 · Apr 23, 2023 · Updated 2 years ago
- Repository for the paper: Teaching VLMs to Localize Specific Objects from In-context Examples ☆40 · Nov 27, 2024 · Updated last year
- Code of the CVPR 2022 paper "HOP: History-and-Order Aware Pre-training for Vision-and-Language Navigation" ☆30 · Aug 21, 2023 · Updated 2 years ago
- Bottom-up features extractor implemented in PyTorch ☆72 · Dec 5, 2019 · Updated 6 years ago