tsinghua-fib-lab / CityEQA
☆23 · Updated 7 months ago
Alternatives and similar repositories for CityEQA
Users interested in CityEQA are comparing it to the libraries listed below.
- [AAAI-25 Oral] Official Implementation of "FLAME: Learning to Navigate with Multimodal LLM in Urban Environments" ☆62 · Updated 3 weeks ago
- Repository for Vision-and-Language Navigation via Causal Learning (Accepted by CVPR 2024) ☆88 · Updated 4 months ago
- ☆83 · Updated 5 months ago
- Embodied Question Answering (EQA) benchmark and method (ICCV 2025) ☆38 · Updated 2 months ago
- [ECCV 2024] Official implementation of C-Instructor: Controllable Navigation Instruction Generation with Chain of Thought Prompting ☆25 · Updated 10 months ago
- ☆21 · Updated last year
- Code of the paper "NavCoT: Boosting LLM-Based Vision-and-Language Navigation via Learning Disentangled Reasoning" (TPAMI 2025) ☆103 · Updated 4 months ago
- [AAAI 2024] Official implementation of NavGPT: Explicit Reasoning in Vision-and-Language Navigation with Large Language Models ☆276 · Updated last year
- ☆18 · Updated last year
- [ACL 24] The official implementation of MapGPT: Map-Guided Prompting with Adaptive Path Planning for Vision-and-Language Navigation. ☆111 · Updated 5 months ago
- Fast-Slow Test-time Adaptation for Online Vision-and-Language Navigation ☆25 · Updated last year
- ☆10 · Updated last year
- Code of the paper "Correctable Landmark Discovery via Large Models for Vision-Language Navigation" (TPAMI 2024) ☆14 · Updated last year
- Codebase of ACL 2023 Findings "Aerial Vision-and-Dialog Navigation" ☆55 · Updated 11 months ago
- [ICCV 2023] CoTDet: Affordance Knowledge Prompting for Task Driven Object Detection ☆17 · Updated 5 months ago
- Official implementation of the paper "StreamVLN: Streaming Vision-and-Language Navigation via SlowFast Context Modeling" ☆246 · Updated 2 weeks ago
- Official implementation of Think Global, Act Local: Dual-scale Graph Transformer for Vision-and-Language Navigation (CVPR'22 Oral) ☆217 · Updated 2 years ago
- [CVPR 2024] The code for the paper "Towards Learning a Generalist Model for Embodied Navigation" ☆207 · Updated last year
- Official implementation of the paper "InSpire: Vision-Language-Action Models with Intrinsic Spatial Reasoning" ☆44 · Updated 2 weeks ago
- ☆146 · Updated 4 months ago
- ☆29 · Updated 11 months ago
- The official repository of [CVPR 2025] DSPNet: Dual-vision Scene Perception for Robust 3D Question Answering ☆24 · Updated 6 months ago
- [NeurIPS 2025] DreamVLA: A Vision-Language-Action Model Dreamed with Comprehensive World Knowledge ☆190 · Updated last month
- Training code of waypoint predictor in Discrete-to-Continuous VLN. ☆25 · Updated last year
- [NeurIPS 2025] ⭐️ Reason-RFT: Reinforcement Fine-Tuning for Visual Reasoning ☆218 · Updated 2 weeks ago
- [ECCV 2024] Official implementation of NavGPT-2: Unleashing Navigational Reasoning Capability for Large Vision-Language Models ☆207 · Updated last year
- Official implementation of Why Only Text: Empowering Vision-and-Language Navigation with Multi-modal Prompts (IJCAI 2024) ☆14 · Updated last year
- ☆59 · Updated 6 months ago
- [NeurIPS 2025] Official implementation of "RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics" ☆173 · Updated 2 weeks ago