alibaba-damo-academy / RynnVLA-001
RynnVLA-001: Using Human Demonstrations to Improve Robot Manipulation
☆241 · Updated this week
Alternatives and similar repositories for RynnVLA-001
Users interested in RynnVLA-001 are comparing it to the libraries listed below.
- Video Prediction Policy: A Generalist Robot Policy with Predictive Visual Representations (https://video-prediction-policy.github.io) ☆283 · Updated 5 months ago
- ☆301 · Updated last week
- Official implementation of "OneTwoVLA: A Unified Vision-Language-Action Model with Adaptive Reasoning" ☆192 · Updated 5 months ago
- InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy ☆219 · Updated last week
- [CVPR 2025] The official implementation of "Universal Actions for Enhanced Embodied Foundation Models" ☆207 · Updated 7 months ago
- Official Code For VLA-OS. ☆117 · Updated 4 months ago
- WorldVLA: Towards Autoregressive Action World Model ☆472 · Updated 3 weeks ago
- Galaxea's first VLA release ☆302 · Updated last week
- Dexbotic: Open-Source Vision-Language-Action Toolbox ☆338 · Updated this week
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model ☆304 · Updated 3 weeks ago
- ICCV 2025 ☆139 · Updated 2 months ago
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation" ☆285 · Updated last year
- Official PyTorch Implementation of Unified Video Action Model (RSS 2025) ☆278 · Updated 3 months ago
- A Foundational Vision-Language-Action Model for Synergizing Cognition and Action in Robotic Manipulation ☆364 · Updated this week
- The official repo for "SpatialBot: Precise Spatial Understanding with Vision Language Models." ☆313 · Updated last month
- starVLA: A Lego-like Codebase for Vision-Language-Action Model Developing ☆323 · Updated this week
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos ☆394 · Updated 9 months ago
- Being-H0: Vision-Language-Action Pretraining from Large-Scale Human Videos ☆175 · Updated last month
- F1: A Vision Language Action Model Bridging Understanding and Generation to Actions ☆123 · Updated last week
- 🔥 SpatialVLA: a spatial-enhanced vision-language-action model trained on 1.1 million real robot episodes. Accepted at RSS 2025. ☆553 · Updated 4 months ago
- Official repo of VLABench, a large-scale benchmark designed for fairly evaluating VLA, Embodied Agent, and VLMs. ☆316 · Updated 2 months ago
- ☆403 · Updated 9 months ago
- GraspVLA: a Grasping Foundation Model Pre-trained on Billion-scale Synthetic Action Data ☆258 · Updated 3 months ago
- Paper list in the survey: A Survey on Vision-Language-Action Models: An Action Tokenization Perspective ☆296 · Updated 3 months ago
- The repo of the paper `RoboMamba: Multimodal State Space Model for Efficient Robot Reasoning and Manipulation` ☆137 · Updated 10 months ago
- [ICLR 2025 Oral] Seer: Predictive Inverse Dynamics Models are Scalable Learners for Robotic Manipulation ☆247 · Updated 3 months ago
- [NeurIPS 2025] DreamVLA: A Vision-Language-Action Model Dreamed with Comprehensive World Knowledge ☆203 · Updated last month
- Single-file implementation to advance vision-language-action (VLA) models with reinforcement learning. ☆319 · Updated last month
- [ICCV 2025 Oral] Latent Motion Token as the Bridging Language for Learning Robot Manipulation from Videos ☆143 · Updated last month
- InstructVLA: Vision-Language-Action Instruction Tuning from Understanding to Manipulation ☆59 · Updated last month