ChanganVR / awesome-embodied-vision
Reading list for research topics in embodied vision
☆611 · Updated 3 months ago
Alternatives and similar repositories for awesome-embodied-vision
Users who are interested in awesome-embodied-vision are comparing it to the repositories listed below
- A curated list for vision-and-language navigation. ACL 2022 paper "Vision-and-Language Navigation: A Survey of Tasks, Methods, and Future…" ☆486 · Updated last year
- A curated list of research papers in Vision-Language Navigation (VLN) ☆210 · Updated last year
- Vision-and-Language Navigation in Continuous Environments using Habitat ☆432 · Updated 4 months ago
- Ideas and thoughts about the fascinating Vision-and-Language Navigation ☆223 · Updated last year
- PyTorch code for the NeurIPS 2020 paper "Object Goal Navigation using Goal-Oriented Semantic Exploration" ☆375 · Updated last year
- Official code release for ConceptGraphs ☆590 · Updated 4 months ago
- OmniGibson: a platform for accelerating Embodied AI research built upon NVIDIA's Omniverse engine. Join our Discord for support: https://… ☆690 · Updated this week
- A curated list of awesome papers on Embodied AI and related research/industry-driven resources. ☆431 · Updated last month
- Paper list in the survey paper: Toward General-Purpose Robots via Foundation Models: A Survey and Meta-Analysis ☆429 · Updated 3 months ago
- ☆229 · Updated 4 months ago
- [AAAI 2024] Official implementation of NavGPT: Explicit Reasoning in Vision-and-Language Navigation with Large Language Models ☆233 · Updated last year
- Code of the CVPR 2021 Oral paper: A Recurrent Vision-and-Language BERT for Navigation ☆178 · Updated 2 years ago
- Code for the Habitat Challenge ☆329 · Updated 2 years ago
- An open source framework for research in Embodied-AI from AI2. ☆346 · Updated 4 months ago
- [ICML 2024] Official code repository for 3D embodied generalist agent LEO ☆437 · Updated 3 weeks ago
- ☆106 · Updated last year
- [ICRA 2023] Implementation of Visual Language Maps for Robot Navigation ☆475 · Updated 10 months ago
- ALFRED - A Benchmark for Interpreting Grounded Instructions for Everyday Tasks ☆417 · Updated 3 weeks ago
- Official implementation of Think Global, Act Local: Dual-scale Graph Transformer for Vision-and-Language Navigation (CVPR'22 Oral). ☆182 · Updated last year
- The repository provides code associated with the paper VLFM: Vision-Language Frontier Maps for Zero-Shot Semantic Navigation (ICRA 2024) ☆449 · Updated 4 months ago
- A curated list of state-of-the-art research in embodied AI, focusing on vision-language-action (VLA) models, vision-language navigation (… ☆625 · Updated this week
- [CVPR 2024 & NeurIPS 2024] EmbodiedScan: A Holistic Multi-Modal 3D Perception Suite Towards Embodied AI ☆591 · Updated 2 months ago
- Official code and checkpoint release for mobile robot foundation models: GNM, ViNT, and NoMaD. ☆826 · Updated 8 months ago
- Official codebase for EmbCLIP ☆125 · Updated last year
- A curated list of 3D Vision papers relating to Robotics domain in the era of large models i.e. LLMs/VLMs, inspired by awesome-computer-vi… ☆699 · Updated 6 months ago
- Mobile manipulation research tools for roboticists ☆1,072 · Updated 11 months ago
- [CVPR 2024] The code for the paper 'Towards Learning a Generalist Model for Embodied Navigation' ☆184 · Updated 10 months ago
- PONI: Potential Functions for ObjectGoal Navigation with Interaction-free Learning. CVPR 2022 (Oral). ☆99 · Updated 2 years ago
- A Simulation Environment to train Robots in Large Realistic Interactive Scenes ☆732 · Updated 10 months ago
- CALVIN - A benchmark for Language-Conditioned Policy Learning for Long-Horizon Robot Manipulation Tasks ☆557 · Updated 3 months ago