The project repository for the paper "EMOS: Embodiment-aware Heterogeneous Multi-robot Operating System with LLM Agents": https://arxiv.org/abs/2410.22662
☆65 (Jan 6, 2025)
Alternatives and similar repositories for EMOS
Users interested in EMOS are comparing it to the repositories listed below.
- Codebase for the paper "RoCo: Dialectic Multi-Robot Collaboration with Large Language Models" (☆239, Oct 4, 2023)
- [Submitted to ICRA 2025] COHERENT: Collaboration of Heterogeneous Multi-Robot System with Large Language Models (☆82, Jun 11, 2025)
- [ICML 2024] Learning Reward for Robot Skills Using Large Language Models via Self-Alignment (☆18, Aug 22, 2024)
- [AAAI 2025] Enhancing Multi-Robot Semantic Navigation Through Multimodal Chain-of-Thought Score Collaboration (☆26, Dec 13, 2024)
- ☆49 (Jan 8, 2026)
- Code for the ICCV 2023 paper "Multi-Object Navigation with dynamically learned neural implicit representations" (☆13, Mar 20, 2024)
- ☆15 (Oct 10, 2024)
- ☆44 (Mar 10, 2022)
- [ICLR 2024] Tree-Planner: Efficient Close-loop Task Planning with Large Language Models (☆19, Jan 4, 2026)
- A new zero-shot framework for exploring and searching for language-described targets in unknown environments based on Large Vision Languag… (☆58, Nov 28, 2024)
- ☆10 (Nov 16, 2023)
- ☆56 (Oct 3, 2024)
- [NeurIPS 2025] "OWMM-Agent: Open World Mobile Manipulation With Multi-modal Agentic Data Synthesis" (☆28, Dec 4, 2025)
- Codebase for the LangNav paper (☆19, Jun 13, 2024)
- A repository accompanying the PARTNR benchmark for using Large Planning Models (LPMs) to solve Human-Robot Collaboration or Robot Instruc… (☆355, Feb 5, 2026)
- ☆131 (Jul 9, 2024)
- ☆13 (Sep 19, 2023)
- Public release for "Explore until Confident: Efficient Exploration for Embodied Question Answering" (☆77, Jul 5, 2024)
- SZU-srun-login-script (☆23, Mar 2, 2025)
- Exploring and searching for targets in unknown environments based on Large Language Models for multi-robot systems (☆100, Jun 30, 2024)
- ☆632 (Mar 25, 2023)
- Official implementation of LLM+MAP: Bimanual Robot Task Planning using Large Language Models (LLMs) and Planning Domain Definition Languag… (☆20, Mar 24, 2025)
- ☆17 (Feb 12, 2025)
- MOKA: Open-World Robotic Manipulation through Mark-based Visual Prompting (RSS 2024) (☆95, Jul 16, 2024)
- Open Vocabulary Object Navigation (☆121, May 15, 2025)
- ☆18 (May 28, 2024)
- Object-Aware Guidance for Autonomous Scene Reconstruction (☆17, Aug 19, 2018)
- Imagine Before Go: Self-Supervised Generative Map for Object Goal Navigation (CVPR 2024) (☆56, Mar 27, 2025)
- SERL: A Software Suite for Sample-Efficient Robotic Reinforcement Learning (☆28, Aug 28, 2025)
- [RSS 2024] Official implementation of "Hierarchical Open-Vocabulary 3D Scene Graphs for Language-Grounded Robot Navigation" (☆444, Jan 19, 2026)
- Official hardware codebase for the paper "BEHAVIOR Robot Suite: Streamlining Real-World Whole-Body Manipulation for Everyday Household Ac… (☆138, Nov 18, 2025)
- ☆27 (Feb 20, 2025)
- Official repository of General Scene Adaptation for Vision-and-Language Navigation (ICLR 2025) (☆67, Apr 16, 2025)
- ☆21 (Apr 12, 2024)
- ☆34 (Apr 1, 2024)
- [ICLR 2024] Source code for the paper "Building Cooperative Embodied Agents Modularly with Large Language Models" (☆294, Mar 30, 2025)
- ☆17 (Dec 21, 2020)
- [CoRL 2020] Learning a natural-language-to-LTL executable semantic parser for grounded robotics (☆16, Jul 31, 2022)
- ☆29 (May 21, 2025)