The repository provides code associated with the paper VLFM: Vision-Language Frontier Maps for Zero-Shot Semantic Navigation (ICRA 2024)
☆694 · Updated Nov 12, 2025
Alternatives and similar repositories for vlfm
Users interested in vlfm are comparing it to the repositories listed below.
- Open Vocabulary Object Navigation (☆117 · Updated May 15, 2025)
- [NeurIPS 2024] SG-Nav: Online 3D Scene Graph Prompting for LLM-based Zero-shot Object Navigation (☆319 · Updated Sep 16, 2025)
- [CVPR 2025] UniGoal: Towards Universal Zero-shot Goal-oriented Navigation (☆303 · Updated Sep 16, 2025)
- [RSS 2024] Official implementation of "Hierarchical Open-Vocabulary 3D Scene Graphs for Language-Grounded Robot Navigation" (☆435 · Updated Jan 19, 2026)
- [ICRA 2023] Implementation of Visual Language Maps for Robot Navigation (☆653 · Updated Jul 9, 2024)
- Official code and checkpoint release for mobile robot foundation models: GNM, ViNT, and NoMaD (☆1,150 · Updated Sep 15, 2024)
- [RA-L'25] A Reliable and Efficient Framework for Zero-Shot Object Navigation (☆305 · Updated Feb 10, 2026)
- ☆193 · Updated Mar 29, 2025
- Official GitHub Repository for Paper "Bridging Zero-shot Object Navigation and Foundation Models through Pixel-Guided Navigation Skill", … (☆130 · Updated Oct 30, 2024)
- [ICRA'25] One Map to Find Them All: Real-time Open-Vocabulary Mapping for Zero-shot Multi-Object Navigation (☆136 · Updated Oct 28, 2025)
- Official implementation of OpenFMNav: Towards Open-Set Zero-Shot Object Navigation via Vision-Language Foundation Models