alibaba-damo-academy / RynnVLA-002
RynnVLA-002: A Unified Vision-Language-Action and World Model
☆859 · Updated last month
Alternatives and similar repositories for RynnVLA-002
Users interested in RynnVLA-002 are comparing it to the libraries listed below.
- [RSS 2025] Learning to Act Anywhere with Task-centric Latent Actions · ☆968 · Updated 2 months ago
- 🔥 SpatialVLA: a spatial-enhanced vision-language-action model trained on 1.1 million real robot episodes. Accepted at RSS 2025. · ☆637 · Updated 7 months ago
- StarVLA: A Lego-like Codebase for Vision-Language-Action Model Development · ☆862 · Updated last week
- The official implementation of "Soft-Prompted Transformer as Scalable Cross-Embodiment Vision-Language-Action Model" · ☆461 · Updated this week
- Building General-Purpose Robots Based on Embodied Foundation Model · ☆731 · Updated last month
- Video Prediction Policy: A Generalist Robot Policy with Predictive Visual Representations (https://video-prediction-policy.github.io) · ☆334 · Updated 8 months ago
- Spirit-v1.5: A Robotic Foundation Model by Spirit AI · ☆465 · Updated 2 weeks ago
- SimpleVLA-RL: Scaling VLA Training via Reinforcement Learning · ☆1,302 · Updated 3 weeks ago
- ☆422 · Updated last month
- Dexbotic: Open-Source Vision-Language-Action Toolbox · ☆664 · Updated last week
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model · ☆336 · Updated 3 months ago
- InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy · ☆344 · Updated 3 weeks ago
- Unified Vision-Language-Action Model · ☆263 · Updated 3 months ago
- Single-file implementation to advance vision-language-action (VLA) models with reinforcement learning. · ☆388 · Updated 2 months ago
- OpenHelix: An Open-source Dual-System VLA Model for Robotic Manipulation · ☆337 · Updated 5 months ago
- ☆367 · Updated this week
- Paper list for the survey "A Survey on Vision-Language-Action Models: An Action Tokenization Perspective" · ☆419 · Updated 6 months ago
- RynnVLA-001: Using Human Demonstrations to Improve Robot Manipulation · ☆280 · Updated this week
- Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success · ☆991 · Updated 4 months ago
- A Foundational Vision-Language-Action Model for Synergizing Cognition and Action in Robotic Manipulation · ☆400 · Updated 2 months ago
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos · ☆445 · Updated last year
- ☆449 · Updated this week
- Official code of RDT 2 · ☆626 · Updated last month
- ☆832 · Updated 3 months ago
- A curated list of large VLM-based VLA models for robotic manipulation. · ☆328 · Updated last month
- LLaVA-VLA: A Simple Yet Powerful Vision-Language-Action Model [Actively Maintained 🔥] · ☆174 · Updated 2 months ago
- Galaxea's first VLA release · ☆498 · Updated last week
- Official repo of VLABench, a large-scale benchmark designed for fairly evaluating VLA models, embodied agents, and VLMs. · ☆376 · Updated 2 months ago
- OpenVLA: An open-source vision-language-action model for robotic manipulation. · ☆329 · Updated 10 months ago
- The official repo for "SpatialBot: Precise Spatial Understanding with Vision Language Models". · ☆329 · Updated 4 months ago