Rongtao-Xu / Awesome-LLM-EN
☆98 · Updated last year
Alternatives and similar repositories for Awesome-LLM-EN:
Users interested in Awesome-LLM-EN are comparing it to the repositories listed below.
- Leveraging Large Language Models for Visual Target Navigation ☆101 · Updated last year
- ☆104 · Updated 4 months ago
- Official GitHub Repository for Paper "Bridging Zero-shot Object Navigation and Foundation Models through Pixel-Guided Navigation Skill", … ☆85 · Updated 4 months ago
- ☆75 · Updated 7 months ago
- [AAAI 2024] Official implementation of NavGPT: Explicit Reasoning in Vision-and-Language Navigation with Large Language Models ☆195 · Updated last year
- [ECCV 2024] Official implementation of NavGPT-2: Unleashing Navigational Reasoning Capability for Large Vision-Language Models ☆119 · Updated 5 months ago
- Open Vocabulary Object Navigation ☆56 · Updated last week
- ZSON: Zero-Shot Object-Goal Navigation using Multimodal Goal Embeddings. NeurIPS 2022 ☆67 · Updated 2 years ago
- [CVPR 2023] CoWs on Pasture: Baselines and Benchmarks for Language-Driven Zero-Shot Object Navigation ☆118 · Updated last year
- [CVPR 2023] We propose a framework for the challenging 3D-aware ObjectNav based on two straightforward sub-policies. The two sub-policies, … ☆66 · Updated 9 months ago
- Code for LGX (Language Guided Exploration). We use LLMs to perform embodied robot navigation in a zero-shot manner. ☆58 · Updated last year
- Code of the paper "NavCoT: Boosting LLM-Based Vision-and-Language Navigation via Learning Disentangled Reasoning" ☆39 · Updated 10 months ago
- Find What You Want: Learning Demand-conditioned Object Attribute Space for Demand-driven Navigation ☆54 · Updated last month
- Language-Grounded Dynamic Scene Graphs for Interactive Object Search with Mobile Manipulation. Project website: http://moma-llm.cs.uni-fr… ☆66 · Updated 7 months ago
- The repository provides code associated with the paper VLFM: Vision-Language Frontier Maps for Zero-Shot Semantic Navigation (ICRA 2024) ☆321 · Updated last month
- Official implementation of Think Global, Act Local: Dual-scale Graph Transformer for Vision-and-Language Navigation (CVPR'22 Oral) ☆149 · Updated last year
- [CVPR 2024] The code for the paper 'Towards Learning a Generalist Model for Embodied Navigation' ☆160 · Updated 8 months ago
- Public release for "Explore until Confident: Efficient Exploration for Embodied Question Answering" ☆43 · Updated 7 months ago
- Awesome Embodied Navigation: Concept, Paradigm and State-of-the-arts ☆101 · Updated 3 months ago
- Official implementation of GridMM: Grid Memory Map for Vision-and-Language Navigation (ICCV'23) ☆77 · Updated 10 months ago
- [RSS 2024] NaVid: Video-based VLM Plans the Next Step for Vision-and-Language Navigation ☆83 · Updated 3 weeks ago
- SPOC: Imitating Shortest Paths in Simulation Enables Effective Navigation and Manipulation in the Real World ☆108 · Updated 4 months ago
- ☆22 · Updated 11 months ago
- Official code release for "Navigation with Large Language Models: Semantic Guesswork as a Heuristic for Planning" ☆45 · Updated last year
- Vision-Language Navigation Benchmark in Isaac Lab ☆92 · Updated 2 months ago
- A Chinese-language tutorial for using Habitat-sim ☆38 · Updated last year
- PONI: Potential Functions for ObjectGoal Navigation with Interaction-free Learning. CVPR 2022 (Oral). ☆90 · Updated 2 years ago
- PoliFormer: Scaling On-Policy RL with Transformers Results in Masterful Navigators ☆65 · Updated 3 months ago
- Code for ICRA24 paper "Think, Act, and Ask: Open-World Interactive Personalized Robot Navigation". Paper: https://arxiv.org/abs/2310.07968 … ☆27 · Updated 8 months ago
- ☆221 · Updated last month