Stanford-ILIAD / explore-eqa
Public release for "Explore until Confident: Efficient Exploration for Embodied Question Answering"
☆64 · Updated last year
Alternatives and similar repositories for explore-eqa
Users interested in explore-eqa are comparing it to the repositories listed below.
- Find What You Want: Learning Demand-conditioned Object Attribute Space for Demand-driven Navigation ☆60 · Updated 7 months ago
- ☆105 · Updated last year
- ZSON: Zero-Shot Object-Goal Navigation using Multimodal Goal Embeddings (NeurIPS 2022) ☆83 · Updated 2 years ago
- Code for the ICRA 2024 paper "Think, Act, and Ask: Open-World Interactive Personalized Robot Navigation" (paper: https://arxiv.org/abs/2310.07968) … ☆31 · Updated last year
- SPOC: Imitating Shortest Paths in Simulation Enables Effective Navigation and Manipulation in the Real World ☆131 · Updated 9 months ago
- ☆42 · Updated last year
- Open Vocabulary Object Navigation ☆86 · Updated 3 months ago
- ☆161 · Updated 5 months ago
- Python tools to work with the habitat-sim environment ☆30 · Updated last year
- [CVPR 2023] CoWs on Pasture: Baselines and Benchmarks for Language-Driven Zero-Shot Object Navigation ☆139 · Updated last year
- Official implementation of OpenFMNav: Towards Open-Set Zero-Shot Object Navigation via Vision-Language Foundation Models ☆52 · Updated 11 months ago
- Code repository for the Habitat Synthetic Scenes Dataset (HSSD) paper ☆100 · Updated last year
- ☆34 · Updated last year
- [ICCV 2023] Learning Vision-and-Language Navigation from YouTube Videos ☆60 · Updated 8 months ago
- Project repository for the paper "EMOS: Embodiment-aware Heterogeneous Multi-robot Operating System with LLM Agents" (paper: https://arxiv.org/abs…) ☆46 · Updated 7 months ago
- [CoRL 2024] Official repo of "A3VLM: Actionable Articulation-Aware Vision Language Model" ☆117 · Updated 10 months ago
- Code for the paper "NavCoT: Boosting LLM-Based Vision-and-Language Navigation via Learning Disentangled Reasoning" (TPAMI 2025) ☆96 · Updated 2 months ago
- Official repository for the paper "Bridging Zero-shot Object Navigation and Foundation Models through Pixel-Guided Navigation Skill" … ☆115 · Updated 10 months ago
- Code for training embodied agents using IL and RL fine-tuning at scale for ObjectNav ☆77 · Updated 4 months ago
- Code and data for the CVPR 2022 paper "Bridging the Gap Between Learning in Discrete and Continuous Environments for Vision-and-Language N…" ☆130 · Updated last year
- Official implementation of the ECCV 2024 paper "Prioritized Semantic Learning for Zero-shot Instance Navigation" ☆42 · Updated 2 months ago
- Official implementation of "Lookahead Exploration with Neural Radiance Representation for Continuous Vision-Language Navigation" (CVPR 2024 H…) ☆89 · Updated 4 months ago
- PoliFormer: Scaling On-Policy RL with Transformers Results in Masterful Navigators ☆92 · Updated 9 months ago
- PONI: Potential Functions for ObjectGoal Navigation with Interaction-free Learning (CVPR 2022, Oral) ☆105 · Updated 2 years ago
- Official implementation of GridMM: Grid Memory Map for Vision-and-Language Navigation (ICCV 2023) ☆93 · Updated last year
- [CVPR 2023] We propose a framework for the challenging 3D-aware ObjectNav based on two straightforward sub-policies. The two sub-policies,… ☆77 · Updated last year
- ☆13 · Updated 6 months ago
- [CVPR 2024] Code for the paper "Towards Learning a Generalist Model for Embodied Navigation" ☆202 · Updated last year
- [CVPR 2025] RoomTour3D: Geometry-aware, cheap, and automatic data from web videos for embodied navigation ☆55 · Updated 5 months ago
- Leveraging Large Language Models for Visual Target Navigation ☆132 · Updated last year