YanyuanQiao / MiC
Code of the ICCV 2023 paper "March in Chat: Interactive Prompting for Remote Embodied Referring Expression"
☆26 · Updated May 22, 2024
Alternatives and similar repositories for MiC
Users interested in MiC are comparing it to the repositories listed below.
- Code of the CVPR 2022 paper "HOP: History-and-Order Aware Pre-training for Vision-and-Language Navigation" ☆30 · Updated Aug 21, 2023
- Official implementation of Layout-aware Dreamer for Embodied Referring Expression Grounding [AAAI 23] ☆16 · Updated Apr 13, 2023
- [ACL 2023] Official code repository for VLN-Trans ☆14 · Updated Sep 10, 2023
- ☆10 · Updated Nov 16, 2023
- ☆24 · Updated Oct 8, 2023
- Official implementation of "Grounded Entity-Landmark Adaptive Pre-training for Vision-and-Language Navigation" (ICCV 2023 Oral) ☆20 · Updated Oct 21, 2023
- Official implementation of Learning from Unlabeled 3D Environments for Vision-and-Language Navigation (ECCV'22) ☆43 · Updated Mar 16, 2023
- Implementation of the ICCV 2023 paper DREAMWALKER: Mental Planning for Continuous Vision-Language Navigation ☆20 · Updated Jul 24, 2023
- Official PyTorch implementation for the NeurIPS 2022 paper "Weakly-Supervised Multi-Granularity Map Learning for Vision-and-Language Navigation" ☆33 · Updated Apr 23, 2023
- Baseline for the REVERIE Challenge using HOP ☆10 · Updated Jul 4, 2022
- [AAAI 2024] Official implementation of NavGPT: Explicit Reasoning in Vision-and-Language Navigation with Large Language Models ☆314 · Updated Nov 7, 2023
- Dataset and baseline for Scenario Oriented Object Navigation (SOON) ☆22 · Updated Nov 23, 2021
- PyTorch code and data for EnvEdit: Environment Editing for Vision-and-Language Navigation (CVPR 2022) ☆30 · Updated Aug 2, 2022
- Official implementation of Learning Navigational Visual Representations with Semantic Map Supervision (ICCV 2023) ☆27 · Updated Jul 30, 2023
- Official implementation of Think Global, Act Local: Dual-scale Graph Transformer for Vision-and-Language Navigation (CVPR'22 Oral) ☆255 · Updated Jun 27, 2023
- Official implementation of KERM: Knowledge Enhanced Reasoning for Vision-and-Language Navigation (CVPR'23) ☆45 · Updated Aug 6, 2024
- Official implementation of GridMM: Grid Memory Map for Vision-and-Language Navigation (ICCV'23) ☆102 · Updated Apr 18, 2024
- REVERIE: Remote Embodied Visual Referring Expression in Real Indoor Environments ☆148 · Updated Feb 7, 2026
- Open-sourced code for Learning-to-navigate-by-forgetting ☆21 · Updated Apr 11, 2024
- Code for A Dual Semantic-Aware Recurrent Global-Adaptive Network for Vision-and-Language Navigation ☆17 · Updated Apr 25, 2024
- Code of the CVPR 2021 Oral paper: A Recurrent Vision-and-Language BERT for Navigation ☆202 · Updated Aug 13, 2022
- Code for the NeurIPS 2021 paper "Curriculum Learning for Vision-and-Language Navigation" ☆14 · Updated Dec 13, 2022
- ☆14 · Updated Sep 21, 2022
- ☆16 · Updated Jun 12, 2024
- Official implementation of IVLN-CE: Iterative Vision-and-Language Navigation in Continuous Environments ☆35 · Updated Dec 16, 2023
- ☆55 · Updated Apr 1, 2022
- Repository of the NeurIPS 2022 paper "Towards Versatile Embodied Navigation" ☆21 · Updated Dec 8, 2022
- ☆33 · Updated Aug 19, 2023
- Official implementation of History Aware Multimodal Transformer for Vision-and-Language Navigation (NeurIPS'21) ☆143 · Updated Jun 14, 2023
- Official REVERIE Grounding Model of the REVERIE Challenge @ CSIG 2022 ☆19 · Updated Oct 17, 2022
- ☆37 · Updated Apr 2, 2024
- Repository for Vision-and-Language Navigation via Causal Learning (CVPR 2024) ☆99 · Updated Jun 4, 2025
- [TPAMI 2024] Official repo of "ETPNav: Evolving Topological Planning for Vision-Language Navigation in Continuous Environments" ☆416 · Updated Apr 5, 2025
- ☆23 · Updated Mar 9, 2023
- [ICCV 2025] Official implementation of SAME: Learning Generic Language-Guided Visual Navigation with State-Adaptive Mixture of Experts ☆35 · Updated Dec 17, 2025
- [AAAI 2024] Weakly Supervised Multimodal Affordance Grounding for Egocentric Images ☆13 · Updated Nov 10, 2024
- Code for the paper "Adversarial Reinforced Instruction Attacker for Robust Vision-Language Navigation" (TPAMI 2021) ☆10 · Updated Jul 15, 2022
- ☆25 · Updated Jul 10, 2024
- ☆11 · Updated Jul 16, 2024