ChanganVR / awesome-embodied-vision
Reading list for research topics in embodied vision
☆658 · Updated 2 months ago
Alternatives and similar repositories for awesome-embodied-vision
Users interested in awesome-embodied-vision are comparing it to the repositories listed below.
- A curated list for vision-and-language navigation. ACL 2022 paper "Vision-and-Language Navigation: A Survey of Tasks, Methods, and Future… ☆533 · Updated last year
- Vision-and-Language Navigation in Continuous Environments using Habitat ☆534 · Updated 7 months ago
- Ideas and thoughts about the fascinating Vision-and-Language Navigation ☆249 · Updated 2 years ago
- A curated list of research papers in Vision-Language Navigation (VLN) ☆219 · Updated last year
- PyTorch code for the NeurIPS-20 paper "Object Goal Navigation using Goal-Oriented Semantic Exploration" ☆406 · Updated 2 years ago
- Paper list in the survey paper "Toward General-Purpose Robots via Foundation Models: A Survey and Meta-Analysis" ☆442 · Updated 7 months ago
- A curated list of awesome papers on Embodied AI and related research/industry-driven resources ☆464 · Updated 2 months ago
- BEHAVIOR-1K: a platform for accelerating Embodied AI research. Join our Discord for support: https://discord.gg/bccR5vGFEx ☆741 · Updated last week
- [ICRA2023] Implementation of Visual Language Maps for Robot Navigation ☆547 · Updated last year
- ☆494 · Updated 2 years ago
- ☆115 · Updated last year
- ☆245 · Updated 7 months ago
- AI Research Platform for Reinforcement Learning from Real Panoramic Images ☆613 · Updated last year
- Code for the CVPR 2021 Oral paper "A Recurrent Vision-and-Language BERT for Navigation" ☆185 · Updated 3 years ago
- Mobile manipulation research tools for roboticists ☆1,114 · Updated last year
- Official implementation of "Think Global, Act Local: Dual-scale Graph Transformer for Vision-and-Language Navigation" (CVPR'22 Oral) ☆203 · Updated 2 years ago
- CALVIN: a benchmark for Language-Conditioned Policy Learning for Long-Horizon Robot Manipulation Tasks ☆656 · Updated last month
- [CVPR 2024] Code for the paper "Towards Learning a Generalist Model for Embodied Navigation" ☆202 · Updated last year
- Official code and checkpoint release for the mobile robot foundation models GNM, ViNT, and NoMaD ☆953 · Updated 11 months ago
- Code associated with the paper "VLFM: Vision-Language Frontier Maps for Zero-Shot Semantic Navigation" (ICRA 2024) ☆545 · Updated 7 months ago
- ZSON: Zero-Shot Object-Goal Navigation using Multimodal Goal Embeddings (NeurIPS 2022) ☆83 · Updated 2 years ago
- ☆296 · Updated 4 months ago
- A curated list of 3D Vision papers relating to the Robotics domain in the era of large models, i.e. LLMs/VLMs, inspired by awesome-computer-vi… ☆752 · Updated last month
- Official implementation of "History Aware Multimodal Transformer for Vision-and-Language Navigation" (NeurIPS'21) ☆125 · Updated 2 years ago
- [AAAI 2024] Official implementation of "NavGPT: Explicit Reasoning in Vision-and-Language Navigation with Large Language Models" ☆264 · Updated last year
- Official code release for ConceptGraphs ☆655 · Updated 7 months ago
- Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success ☆627 · Updated 4 months ago
- Code for the Habitat challenge ☆336 · Updated 2 years ago
- A curated list of state-of-the-art research in embodied AI, focusing on vision-language-action (VLA) models, vision-language navigation (… ☆1,424 · Updated this week
- ICRA2024 Paper List ☆566 · Updated 11 months ago