alibaba-damo-academy / RynnVLA-001
RynnVLA-001: Using Human Demonstrations to Improve Robot Manipulation
☆274 · Updated 3 weeks ago
Alternatives and similar repositories for RynnVLA-001
Users interested in RynnVLA-001 are comparing it to the libraries listed below.
- Video Prediction Policy: A Generalist Robot Policy with Predictive Visual Representations (https://video-prediction-policy.github.io) ☆314 · Updated 7 months ago
- [CVPR 2025] The official implementation of "Universal Actions for Enhanced Embodied Foundation Models" ☆222 · Updated last month
- Official implementation of "OneTwoVLA: A Unified Vision-Language-Action Model with Adaptive Reasoning" ☆205 · Updated 7 months ago
- Official code for VLA-OS. ☆132 · Updated 6 months ago
- ☆344 · Updated last week
- F1: A Vision Language Action Model Bridging Understanding and Generation to Actions ☆150 · Updated 2 months ago
- InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy ☆323 · Updated 2 weeks ago
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model ☆331 · Updated 2 months ago
- Galaxea's first VLA release ☆330 · Updated 2 months ago
- Unified Vision-Language-Action Model ☆257 · Updated 2 months ago
- InstructVLA: Vision-Language-Action Instruction Tuning from Understanding to Manipulation ☆88 · Updated 3 months ago
- ICCV 2025 ☆145 · Updated 3 weeks ago
- GraspVLA: a Grasping Foundation Model Pre-trained on Billion-scale Synthetic Action Data ☆308 · Updated 5 months ago
- The official implementation of "Soft-Prompted Transformer as Scalable Cross-Embodiment Vision-Language-Action Model" ☆393 · Updated last week
- Official PyTorch Implementation of Unified Video Action Model (RSS 2025) ☆309 · Updated 5 months ago
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation" ☆297 · Updated last year
- Single-file implementation to advance vision-language-action (VLA) models with reinforcement learning. ☆372 · Updated last month
- The repo of the paper "RoboMamba: Multimodal State Space Model for Efficient Robot Reasoning and Manipulation" ☆148 · Updated last year
- OpenHelix: An Open-source Dual-System VLA Model for Robotic Manipulation ☆332 · Updated 4 months ago
- The official repo for "SpatialBot: Precise Spatial Understanding with Vision Language Models." ☆326 · Updated 3 months ago
- ☆420 · Updated last month
- [NeurIPS 2025] DreamVLA: A Vision-Language-Action Model Dreamed with Comprehensive World Knowledge ☆263 · Updated 3 months ago
- VLAC: A Vision-Language-Action-Critic Model for Robotic Real-World Reinforcement Learning ☆254 · Updated 3 months ago
- [ICLR 2025 Oral] Seer: Predictive Inverse Dynamics Models are Scalable Learners for Robotic Manipulation ☆270 · Updated 5 months ago
- 🔥 SpatialVLA: a spatial-enhanced vision-language-action model trained on 1.1 million real robot episodes. Accepted at RSS 2025. ☆612 · Updated 6 months ago
- LLaVA-VLA: A Simple Yet Powerful Vision-Language-Action Model [Actively Maintained 🔥] ☆173 · Updated 2 months ago
- Official implementation of "Data Scaling Laws in Imitation Learning for Robotic Manipulation" ☆197 · Updated last year
- [CVPR 2025] RoboBrain: A Unified Brain Model for Robotic Manipulation from Abstract to Concrete. Official repository. ☆354 · Updated 2 months ago
- Ctrl-World: A Controllable Generative World Model for Robot Manipulation ☆232 · Updated 3 weeks ago
- Team Comet's 2025 BEHAVIOR Challenge Codebase ☆169 · Updated 2 weeks ago