alibaba-damo-academy / RynnVLA-001
RynnVLA-001: Using Human Demonstrations to Improve Robot Manipulation
☆250 · Updated 3 weeks ago
Alternatives and similar repositories for RynnVLA-001
Users interested in RynnVLA-001 are comparing it to the repositories listed below
- Video Prediction Policy: A Generalist Robot Policy with Predictive Visual Representations (https://video-prediction-policy.github.io) ☆295 · Updated 6 months ago
- Official implementation of "OneTwoVLA: A Unified Vision-Language-Action Model with Adaptive Reasoning" ☆198 · Updated 5 months ago
- Official code for VLA-OS ☆123 · Updated 4 months ago
- [CVPR 2025] The official implementation of "Universal Actions for Enhanced Embodied Foundation Models" ☆212 · Updated 2 weeks ago
- InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy ☆269 · Updated last week
- ☆313 · Updated this week
- ICCV 2025 ☆142 · Updated last week
- Being-H0: Vision-Language-Action Pretraining from Large-Scale Human Videos ☆179 · Updated 2 months ago
- Galaxea's first VLA release ☆314 · Updated 3 weeks ago
- Official PyTorch implementation of Unified Video Action Model (RSS 2025) ☆291 · Updated 3 months ago
- F1: A Vision-Language-Action Model Bridging Understanding and Generation to Actions ☆135 · Updated last month
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model ☆318 · Updated last month
- Ctrl-World: A Controllable Generative World Model for Robot Manipulation ☆165 · Updated 3 weeks ago
- The official repo for "SpatialBot: Precise Spatial Understanding with Vision Language Models" ☆317 · Updated 2 months ago
- WorldVLA: Towards Autoregressive Action World Model ☆539 · Updated last month
- InstructVLA: Vision-Language-Action Instruction Tuning from Understanding to Manipulation ☆66 · Updated last month
- OpenHelix: An Open-source Dual-System VLA Model for Robotic Manipulation ☆323 · Updated 2 months ago
- Official repo of VLABench, a large-scale benchmark designed for fairly evaluating VLAs, embodied agents, and VLMs ☆332 · Updated last week
- Unified Vision-Language-Action Model ☆226 · Updated last month
- [ICLR 2025 Oral] Seer: Predictive Inverse Dynamics Models are Scalable Learners for Robotic Manipulation ☆260 · Updated 4 months ago
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos ☆403 · Updated 10 months ago
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation" ☆286 · Updated last year
- StarVLA: A Lego-like Codebase for Vision-Language-Action Model Development ☆456 · Updated this week
- GraspVLA: A Grasping Foundation Model Pre-trained on Billion-scale Synthetic Action Data ☆273 · Updated 3 months ago
- ☆411 · Updated 9 months ago
- A Foundational Vision-Language-Action Model for Synergizing Cognition and Action in Robotic Manipulation ☆377 · Updated 3 weeks ago
- Single-file implementation to advance vision-language-action (VLA) models with reinforcement learning ☆338 · Updated last week
- VLAC: A Vision-Language-Action-Critic Model for Robotic Real-World Reinforcement Learning ☆222 · Updated last month
- ✨✨ [NeurIPS 2025] Official implementation of BridgeVLA ☆155 · Updated 2 months ago
- 🦾 A Dual-System VLA with System-2 Thinking ☆116 · Updated 3 months ago