raphael-sch / VELMA
VELMA agent for VLN in Street View
☆25 · Updated last year
Alternatives and similar repositories for VELMA
Users interested in VELMA are comparing it to the repositories listed below:
- Code of the paper "NavCoT: Boosting LLM-Based Vision-and-Language Navigation via Learning Disentangled Reasoning" (TPAMI 2025) ☆97 · Updated 3 months ago
- Official implementation of "Think Global, Act Local: Dual-scale Graph Transformer for Vision-and-Language Navigation" (CVPR'22 Oral) ☆203 · Updated 2 years ago
- Official GitHub repository for the paper "Bridging Zero-shot Object Navigation and Foundation Models through Pixel-Guided Navigation Skill", … ☆115 · Updated 10 months ago
- ☆35 · Updated last year
- [AAAI-25 Oral] Official implementation of "FLAME: Learning to Navigate with Multimodal LLM in Urban Environments" ☆59 · Updated 6 months ago
- ☆105 · Updated last year
- [AAAI 2024] Official implementation of NavGPT: Explicit Reasoning in Vision-and-Language Navigation with Large Language Models ☆264 · Updated last year
- ☆26 · Updated 2 months ago
- [ACL 24] The official implementation of MapGPT: Map-Guided Prompting with Adaptive Path Planning for Vision-and-Language Navigation ☆101 · Updated 4 months ago
- [CVPR 2024] Code for the paper "Towards Learning a Generalist Model for Embodied Navigation" ☆205 · Updated last year
- [CVPR 2024] Code for the paper "Towards Learning a Generalist Model for Embodied Navigation" ☆48 · Updated last year
- [ICCV'23] Learning Vision-and-Language Navigation from YouTube Videos ☆60 · Updated 8 months ago
- Repository for "Vision-and-Language Navigation via Causal Learning" (accepted by CVPR 2024) ☆81 · Updated 3 months ago
- [TMLR 2024] Repository for VLN with foundation models ☆160 · Updated last month
- ZSON: Zero-Shot Object-Goal Navigation using Multimodal Goal Embeddings (NeurIPS 2022) ☆85 · Updated 2 years ago
- [ECCV 2024] Official implementation of NavGPT-2: Unleashing Navigational Reasoning Capability for Large Vision-Language Models ☆196 · Updated 11 months ago
- ☆32 · Updated 2 years ago
- Official implementation of "Why Only Text: Empowering Vision-and-Language Navigation with Multi-modal Prompts" (IJCAI 2024) ☆14 · Updated 10 months ago
- ☆163 · Updated 5 months ago
- [ICCV 2023] PEANUT: Predicting and Navigating to Unseen Targets ☆50 · Updated last year
- Aligning Knowledge Graph with Visual Perception for Object-goal Navigation (ICRA 2024) ☆37 · Updated 5 months ago
- Code for LGX (Language Guided Exploration), which uses LLMs to perform embodied robot navigation in a zero-shot manner ☆64 · Updated last year
- [CVPR 2023] CoWs on Pasture: Baselines and Benchmarks for Language-Driven Zero-Shot Object Navigation ☆139 · Updated last year
- ☆115 · Updated last year
- Code of the paper "Correctable Landmark Discovery via Large Models for Vision-Language Navigation" (TPAMI 2024) ☆14 · Updated last year
- Public release for "Explore until Confident: Efficient Exploration for Embodied Question Answering" ☆64 · Updated last year
- Official implementation of GridMM: Grid Memory Map for Vision-and-Language Navigation (ICCV'23) ☆94 · Updated last year
- Official implementation of the NeurIPS 2023 paper "FGPrompt: Fine-grained Goal Prompting for Image-goal Navigation" ☆33 · Updated last year
- ☆81 · Updated 3 months ago
- Open Vocabulary Object Navigation ☆87 · Updated 3 months ago