SgtVincent / EMOS
The project repository for the paper "EMOS: Embodiment-aware Heterogeneous Multi-robot Operating System with LLM Agents": https://arxiv.org/abs/2410.22662
☆19 · Updated last month
Alternatives and similar repositories for EMOS:
Users interested in EMOS are comparing it to the repositories listed below:
- Public release for "Explore until Confident: Efficient Exploration for Embodied Question Answering" ☆42 · Updated 7 months ago
- [CoRL 2023] REFLECT: Summarizing Robot Experiences for Failure Explanation and Correction ☆83 · Updated 11 months ago
- Official code for the paper "Embodied Multi-Modal Agent Trained by an LLM from a Parallel TextWorld" ☆52 · Updated 4 months ago
- Find What You Want: Learning Demand-conditioned Object Attribute Space for Demand-driven Navigation ☆53 · Updated last month
- Enhancing LLM/VLM capabilities for robot task and motion planning with extra algorithm-based tools ☆58 · Updated 4 months ago
- Public release for "Distillation and Retrieval of Generalizable Knowledge for Robot Manipulation via Language Corrections" ☆43 · Updated 8 months ago
- ProgPrompt for VirtualHome ☆126 · Updated last year
- ☆27 · Updated 10 months ago
- Official repo of VLABench, a large-scale benchmark designed for fairly evaluating VLAs, embodied agents, and VLMs ☆120 · Updated last week
- Official implementation of RAM: Retrieval-Based Affordance Transfer for Generalizable Zero-Shot Robotic Manipulation ☆75 · Updated last month
- Official GitHub repository for the paper "Bridging Zero-shot Object Navigation and Foundation Models through Pixel-Guided Navigation Skill" … ☆84 · Updated 3 months ago
- Code for Reinforcement Learning from Vision Language Foundation Model Feedback ☆79 · Updated 8 months ago
- The official codebase for ManipLLM: Embodied Multimodal Large Language Model for Object-Centric Robotic Manipulation (CVPR 2024) ☆110 · Updated 7 months ago
- Leveraging Large Language Models for Visual Target Navigation ☆100 · Updated last year
- [ICLR 2024] PyTorch code for Plan-Seq-Learn: Language Model Guided RL for Solving Long Horizon Robotics Tasks ☆87 · Updated 6 months ago
- Code for LGX (Language-Guided Exploration). We use LLMs to perform embodied robot navigation in a zero-shot manner. ☆56 · Updated last year
- ☆79 · Updated last year
- PoliFormer: Scaling On-Policy RL with Transformers Results in Masterful Navigators ☆64 · Updated 3 months ago
- [CoRL 2024] RoboEXP: Action-Conditioned Scene Graph via Interactive Exploration for Robotic Manipulation ☆92 · Updated 4 months ago
- Code for the ICRA 2024 paper "Think, Act, and Ask: Open-World Interactive Personalized Robot Navigation". Paper: https://arxiv.org/abs/2310.07968 … ☆25 · Updated 8 months ago
- ☆101 · Updated 3 months ago
- [NeurIPS 2024] CLOVER: Closed-Loop Visuomotor Control with Generative Expectation for Robotic Manipulation ☆95 · Updated 2 months ago
- [IROS 2024 Oral] ManipVQA: Injecting Robotic Affordance and Physically Grounded Information into Multi-Modal Large Language Models ☆84 · Updated 5 months ago
- SPOC: Imitating Shortest Paths in Simulation Enables Effective Navigation and Manipulation in the Real World ☆105 · Updated 3 months ago
- A simple testbed for robotics manipulation policies ☆75 · Updated this week
- ☆29 · Updated 5 months ago
- Code for training embodied agents using IL and RL fine-tuning at scale for ObjectNav ☆63 · Updated last year
- An LLM multi-agent discussion framework for multi-agent/robot scenarios ☆28 · Updated 4 months ago
- MOKA: Open-World Robotic Manipulation through Mark-based Visual Prompting (RSS 2024) ☆70 · Updated 7 months ago