sx-zhang / Layout-based-sTDE
Layout-based Causal Inference for Object Navigation (CVPR 2023)
☆28 · Updated 3 years ago
Alternatives and similar repositories for Layout-based-sTDE
Users interested in Layout-based-sTDE are comparing it to the repositories listed below.
- GMAN: Generative Meta-Adversarial Network for Unseen Object Navigation (ECCV 2022) ☆23 · Updated last year
- Hierarchical Object-to-Zone Graph for Object Navigation (ICCV 2021) ☆49 · Updated 3 years ago
- ☆18 · Updated 4 years ago
- Reading list for research topics in embodied vision ☆11 · Updated 4 years ago
- Unbiased Directed Object Attention Graph for Object Navigation ☆16 · Updated 3 years ago
- Implementation (R2R part) for the paper "Iterative Vision-and-Language Navigation" ☆18 · Updated last year
- Official PyTorch implementation for the NeurIPS 2022 paper "Weakly-Supervised Multi-Granularity Map Learning for Vision-and-Language Navigation" ☆33 · Updated 2 years ago
- ☆38 · Updated 3 years ago
- ☆55 · Updated 3 years ago
- Dual Adaptive Thinking (DAT) for object navigation ☆13 · Updated 3 years ago
- Code for "A Dual Semantic-Aware Recurrent Global-Adaptive Network for Vision-and-Language Navigation" ☆17 · Updated last year
- Code of the ICCV 2023 paper "March in Chat: Interactive Prompting for Remote Embodied Referring Expression" ☆26 · Updated last year
- Official implementation of KERM: Knowledge Enhanced Reasoning for Vision-and-Language Navigation (CVPR'23) ☆44 · Updated last year
- Training code of the waypoint predictor in Discrete-to-Continuous VLN ☆27 · Updated last year
- ZSON: Zero-Shot Object-Goal Navigation using Multimodal Goal Embeddings (NeurIPS 2022) ☆96 · Updated 2 years ago
- Code and data of the CVPR 2022 paper "Bridging the Gap Between Learning in Discrete and Continuous Environments for Vision-and-Language Navigation" ☆141 · Updated 2 years ago
- Official implementation of Think Global, Act Local: Dual-scale Graph Transformer for Vision-and-Language Navigation (CVPR'22 Oral) ☆248 · Updated 2 years ago
- Official implementation of History Aware Multimodal Transformer for Vision-and-Language Navigation (NeurIPS'21).