SgtVincent / EMOS
The project repository for the paper "EMOS: Embodiment-aware Heterogeneous Multi-robot Operating System with LLM Agents": https://arxiv.org/abs/2410.22662
☆46 · Updated 7 months ago
Alternatives and similar repositories for EMOS
Users interested in EMOS are comparing it to the repositories listed below.
- Public release for "Explore until Confident: Efficient Exploration for Embodied Question Answering" ☆63 · Updated last year
- SPOC: Imitating Shortest Paths in Simulation Enables Effective Navigation and Manipulation in the Real World ☆130 · Updated 9 months ago
- Find What You Want: Learning Demand-conditioned Object Attribute Space for Demand-driven Navigation ☆61 · Updated 6 months ago
- [NeurIPS 2024] PIVOT-R: Primitive-Driven Waypoint-Aware World Model for Robotic Manipulation ☆41 · Updated 9 months ago
- Code for the ICRA 2024 paper "Think, Act, and Ask: Open-World Interactive Personalized Robot Navigation". Paper: https://arxiv.org/abs/2310.07968 … ☆31 · Updated last year
- [IROS 2024 Oral] ManipVQA: Injecting Robotic Affordance and Physically Grounded Information into Multi-Modal Large Language Models ☆98 · Updated 11 months ago
- Official repo of VLABench, a large-scale benchmark designed for fairly evaluating VLAs, embodied agents, and VLMs ☆269 · Updated last month
- [CoRL 2023] REFLECT: Summarizing Robot Experiences for Failure Explanation and Correction ☆97 · Updated last year
- ☆159 · Updated 4 months ago
- Official implementation of "OneTwoVLA: A Unified Vision-Language-Action Model with Adaptive Reasoning" ☆163 · Updated 2 months ago
- The official codebase for ManipLLM: Embodied Multimodal Large Language Model for Object-Centric Robotic Manipulation (CVPR 2024) ☆139 · Updated last year
- [CoRL 2024] Official repo of "A3VLM: Actionable Articulation-Aware Vision Language Model" ☆115 · Updated 10 months ago
- ☆91 · Updated last week
- [arXiv 2023] Embodied Task Planning with Large Language Models ☆188 · Updated last year
- A collection of vision-language-action model post-training methods ☆85 · Updated last week
- Code for training embodied agents using IL and RL fine-tuning at scale for ObjectNav ☆77 · Updated 3 months ago
- Official implementation of "Data Scaling Laws in Imitation Learning for Robotic Manipulation" ☆181 · Updated 8 months ago
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization ☆135 · Updated 4 months ago
- [ICRA 2025] FLaRe: Achieving Masterful and Adaptive Robot Policies with Large-Scale Reinforcement Learning Fine-Tuning ☆30 · Updated 7 months ago
- Code for Reinforcement Learning from Vision-Language Foundation Model Feedback ☆117 · Updated last year
- PoliFormer: Scaling On-Policy RL with Transformers Results in Masterful Navigators ☆85 · Updated 8 months ago
- Manipulate-Anything: Automating Real-World Robots Using Vision-Language Models [CoRL 2024] ☆42 · Updated 4 months ago
- Vision-Language Navigation Benchmark in Isaac Lab ☆216 · Updated 2 months ago
- Official code for VLA-OS ☆78 · Updated last month
- Official implementation of the paper "ConRFT: A Reinforced Fine-tuning Method for VLA Models via Consistency Policy" ☆225 · Updated 2 months ago
- Official GitHub repository for the paper "Bridging Zero-shot Object Navigation and Foundation Models through Pixel-Guided Navigation Skill" … ☆112 · Updated 9 months ago
- [ICLR 2025 Oral] Seer: Predictive Inverse Dynamics Models are Scalable Learners for Robotic Manipulation ☆211 · Updated last month
- [CoRL 2024] RoboEXP: Action-Conditioned Scene Graph via Interactive Exploration for Robotic Manipulation ☆110 · Updated 10 months ago
- GraspVLA: A Grasping Foundation Model Pre-trained on Billion-scale Synthetic Action Data ☆177 · Updated 2 weeks ago
- Single-file implementation to advance vision-language-action (VLA) models with reinforcement learning ☆190 · Updated 2 weeks ago