tsinghua-fib-lab / CityEQA
☆23 · Updated 10 months ago
Alternatives and similar repositories for CityEQA
Users interested in CityEQA are comparing it to the repositories listed below.
- [AAAI-25 Oral] Official Implementation of "FLAME: Learning to Navigate with Multimodal LLM in Urban Environments" ☆67 · Updated last month
- ☆87 · Updated 7 months ago
- Repository for Vision-and-Language Navigation via Causal Learning (Accepted by CVPR 2024) ☆94 · Updated 6 months ago
- Embodied Question Answering (EQA) benchmark and method (ICCV 2025) ☆43 · Updated 4 months ago
- [ECCV 2024] Official implementation of C-Instructor: Controllable Navigation Instruction Generation with Chain of Thought Prompting ☆28 · Updated last year
- [ACL 24] The official implementation of MapGPT: Map-Guided Prompting with Adaptive Path Planning for Vision-and-Language Navigation. ☆116 · Updated 7 months ago
- ☆26 · Updated last year
- ☆20 · Updated last year
- Fast-Slow Test-time Adaptation for Online Vision-and-Language Navigation ☆29 · Updated 2 weeks ago
- Code of the paper "NavCoT: Boosting LLM-Based Vision-and-Language Navigation via Learning Disentangled Reasoning" (TPAMI 2025) ☆122 · Updated 6 months ago
- [AAAI 2024] Official implementation of NavGPT: Explicit Reasoning in Vision-and-Language Navigation with Large Language Models ☆299 · Updated 2 years ago
- [ICCV 2023] CoTDet: Affordance Knowledge Prompting for Task Driven Object Detection ☆17 · Updated 7 months ago
- ☆164 · Updated 6 months ago
- [CVPR 2024] The code for paper 'Towards Learning a Generalist Model for Embodied Navigation' ☆57 · Updated last year
- [CVPR 2024] The code for paper 'Towards Learning a Generalist Model for Embodied Navigation' ☆218 · Updated last year
- Official implementation of the paper "InSpire: Vision-Language-Action Models with Intrinsic Spatial Reasoning" ☆46 · Updated 2 weeks ago
- ☆10 · Updated 2 years ago
- [NeurIPS 2025] DreamVLA: A Vision-Language-Action Model Dreamed with Comprehensive World Knowledge ☆251 · Updated 3 months ago
- Official implementation of Think Global, Act Local: Dual-scale Graph Transformer for Vision-and-Language Navigation (CVPR'22 Oral). ☆246 · Updated 2 years ago
- Codebase of ACL 2023 Findings "Aerial Vision-and-Dialog Navigation" ☆59 · Updated last year
- [CVPR 2024] Code for HiKER-SGG: Hierarchical Knowledge Enhanced Robust Scene Graph Generation ☆75 · Updated last year
- Official code for "Embodied-R1: Reinforced Embodied Reasoning for General Robotic Manipulation" ☆111 · Updated 3 months ago
- [ACL'25 Oral] Code for the paper "UrbanVideo-Bench: Benchmarking Vision-Language Models on Embodied Intelligence with Video Data in Urban… ☆23 · Updated 5 months ago
- InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy ☆312 · Updated this week
- [NeurIPS 2025] Official implementation of "RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics" ☆209 · Updated this week
- [CVPR 2025] RoomTour3D - Geometry-aware, cheap and automatic data from web videos for embodied navigation ☆66 · Updated 9 months ago
- Official implementation of Why Only Text: Empowering Vision-and-Language Navigation with Multi-modal Prompts (IJCAI 2024) ☆15 · Updated last year
- LLaVA-VLA: A Simple Yet Powerful Vision-Language-Action Model [Actively Maintained🔥] ☆173 · Updated last month
- [ECCV 2024] Official implementation of NavGPT-2: Unleashing Navigational Reasoning Capability for Large Vision-Language Models ☆228 · Updated last year
- Official implementation of the paper: "StreamVLN: Streaming Vision-and-Language Navigation via SlowFast Context Modeling" ☆339 · Updated last month