alibaba-damo-academy / RynnVLA-001
RynnVLA-001: Using Human Demonstrations to Improve Robot Manipulation
☆277 · Updated last month
Alternatives and similar repositories for RynnVLA-001
Users interested in RynnVLA-001 are comparing it to the libraries listed below.
- Official implementation of "OneTwoVLA: A Unified Vision-Language-Action Model with Adaptive Reasoning" ☆206 · Updated 7 months ago
- InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy ☆342 · Updated 2 weeks ago
- Video Prediction Policy: A Generalist Robot Policy with Predictive Visual Representations https://video-prediction-policy.github.io ☆328 · Updated 8 months ago
- Official Code For VLA-OS. ☆136 · Updated 6 months ago
- F1: A Vision Language Action Model Bridging Understanding and Generation to Actions ☆154 · Updated 2 weeks ago
- The offical Implementation of "Soft-Prompted Transformer as Scalable Cross-Embodiment Vision-Language-Action Model"☆436Updated last week
- ICCV 2025 ☆146 · Updated last month
- InternVLA-A1: Unifying Understanding, Generation, and Action for Robotic Manipulation ☆263 · Updated last week
- [CVPR 2025] The official implementation of "Universal Actions for Enhanced Embodied Foundation Models" ☆224 · Updated 2 months ago
- Unified Vision-Language-Action Model ☆260 · Updated 3 months ago
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model ☆333 · Updated 3 months ago
- ☆360 · Updated this week
- Fast-in-Slow: A Dual-System Foundation Model Unifying Fast Manipulation within Slow Reasoning ☆138 · Updated 5 months ago
- Spirit-v1.5: A Robotic Foundation Model by Spirit AI ☆428 · Updated last week
- Galaxea's first VLA release ☆488 · Updated 2 weeks ago
- InstructVLA: Vision-Language-Action Instruction Tuning from Understanding to Manipulation ☆91 · Updated 3 months ago
- The repo of the paper `RoboMamba: Multimodal State Space Model for Efficient Robot Reasoning and Manipulation` ☆149 · Updated last year
- Being-H0.5: Scaling Human-Centric Robot Learning for Cross-Embodiment Generalization ☆221 · Updated this week
- [NeurIPS 2025] DreamVLA: A Vision-Language-Action Model Dreamed with Comprehensive World Knowledge ☆279 · Updated 2 weeks ago
- VITRA: Scalable Vision-Language-Action Model Pretraining for Robotic Manipulation with Real-Life Human Activity Videos ☆257 · Updated 3 weeks ago
- ✨✨ [NeurIPS 2025] Official implementation of BridgeVLA ☆164 · Updated 4 months ago
- [CoRL 2025] GraspVLA: a Grasping Foundation Model Pre-trained on Billion-scale Synthetic Action Data ☆318 · Updated 3 weeks ago
- 🦾 A Dual-System VLA with System 2 Thinking ☆131 · Updated 5 months ago
- The official repo for "SpatialBot: Precise Spatial Understanding with Vision Language Models." ☆327 · Updated 4 months ago
- OpenHelix: An Open-source Dual-System VLA Model for Robotic Manipulation ☆337 · Updated 4 months ago
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation" ☆299 · Updated last year
- Single-file implementation to advance vision-language-action (VLA) models with reinforcement learning. ☆383 · Updated 2 months ago
- VLAC: A Vision-Language-Action-Critic Model for Robotic Real-World Reinforcement Learning ☆259 · Updated 3 months ago
- [CVPR 2025] Lift3D Foundation Policy: Lifting 2D Large-Scale Pretrained Models for Robust 3D Robotic Manipulation ☆174 · Updated 7 months ago
- [NeurIPS 2025 Spotlight] SoFar: Language-Grounded Orientation Bridges Spatial Reasoning and Object Manipulation ☆221 · Updated 6 months ago