alibaba-damo-academy / RynnRCP
Rynn Robotics Context Protocol
☆91 · Updated this week
Alternatives and similar repositories for RynnRCP
Users interested in RynnRCP are comparing it to the libraries listed below.
- 🤖 RoboOS: A Universal Embodied Operating System for Cross-Embodied and Multi-Robot Collaboration · ☆228 · Updated last month
- [CVPR 2025] RoboBrain: A Unified Brain Model for Robotic Manipulation from Abstract to Concrete. Official Repository. · ☆323 · Updated 2 weeks ago
- Official Algorithm Codebase for the Paper "BEHAVIOR Robot Suite: Streamlining Real-World Whole-Body Manipulation for Everyday Household A… · ☆145 · Updated 2 months ago
- RynnVLA-001: Using Human Demonstrations to Improve Robot Manipulation · ☆240 · Updated last month
- ☆657 · Updated 3 weeks ago
- Helpful DoggyBot: Open-World Object Fetching using Legged Robots and Vision-Language Models · ☆119 · Updated last year
- An offline embodied-intelligence guide dog based on the InternLM2 large model · ☆105 · Updated last year
- ☆54 · Updated 6 months ago
- ☆64 · Updated last week
- This repo is designed for General Robotic Operation System · ☆144 · Updated 9 months ago
- ☆29 · Updated 3 months ago
- Official implementation of CEED-VLA: Consistency Vision-Language-Action Model with Early-Exit Decoding. · ☆40 · Updated last month
- ☆73 · Updated 7 months ago
- The Simulation Framework from AgiBot · ☆304 · Updated last month
- NORA: A Small Open-Sourced Generalist Vision Language Action Model for Embodied Tasks · ☆180 · Updated 2 months ago
- ☆33 · Updated last year
- Dexbotic: Open-Source Vision-Language-Action Toolbox · ☆210 · Updated last week
- Building General-Purpose Robots Based on Embodied Foundation Model · ☆554 · Updated this week
- Official Code for EnerVerse-AC: Envisioning Embodied Environments with Action Condition · ☆123 · Updated 3 months ago
- Galaxea's first VLA release · ☆288 · Updated this week
- ☆190 · Updated 3 weeks ago
- Video Prediction Policy: A Generalist Robot Policy with Predictive Visual Representations https://video-prediction-policy.github.io · ☆283 · Updated 5 months ago
- ☆291 · Updated 2 weeks ago
- Embodied-Reasoner: Synergizing Visual Search, Reasoning, and Action for Embodied Interactive Tasks · ☆176 · Updated last month
- Official Implementation of "Align-Then-stEer: Adapting the Vision-Language Action Models through Unified Latent Guidance". · ☆33 · Updated last week
- Demo of robotics. · ☆55 · Updated last year
- Nav-R1: Reasoning and Navigation in Embodied Scenes · ☆59 · Updated 3 weeks ago
- Legged Open-Vocabulary Object Navigator · ☆56 · Updated 2 weeks ago
- 🔥 SpatialVLA: a spatial-enhanced vision-language-action model that is trained on 1.1 Million real robot episodes. Accepted at RSS 2025. · ☆541 · Updated 4 months ago
- RoboBrain 2.0: Advanced version of RoboBrain. See Better. Think Harder. Do Smarter. · ☆664 · Updated 3 weeks ago