alibaba-damo-academy / RynnVLA-001
RynnVLA-001: Using Human Demonstrations to Improve Robot Manipulation
☆268Updated last week
Alternatives and similar repositories for RynnVLA-001
Users interested in RynnVLA-001 are comparing it to the repositories listed below.
- Video Prediction Policy: A Generalist Robot Policy with Predictive Visual Representations (https://video-prediction-policy.github.io)☆308Updated 6 months ago
- Official implementation of "OneTwoVLA: A Unified Vision-Language-Action Model with Adaptive Reasoning"☆203Updated 6 months ago
- [CVPR 2025] The official implementation of "Universal Actions for Enhanced Embodied Foundation Models"☆216Updated last month
- Official Code For VLA-OS.☆130Updated 5 months ago
- InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy☆305Updated last month
- ☆335Updated this week
- F1: A Vision Language Action Model Bridging Understanding and Generation to Actions☆138Updated last month
- ICCV2025☆143Updated 3 weeks ago
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model☆324Updated 2 months ago
- Galaxea's first VLA release☆322Updated last month
- InstructVLA: Vision-Language-Action Instruction Tuning from Understanding to Manipulation☆82Updated 2 months ago
- Unified Vision-Language-Action Model☆245Updated last month
- Ctrl-World: A Controllable Generative World Model for Robot Manipulation☆198Updated last week
- Single-file implementation to advance vision-language-action (VLA) models with reinforcement learning.☆362Updated last month
- LLaVA-VLA: A Simple Yet Powerful Vision-Language-Action Model [Actively Maintained🔥]☆173Updated last month
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation"☆295Updated last year
- GraspVLA: a Grasping Foundation Model Pre-trained on Billion-scale Synthetic Action Data☆291Updated 4 months ago
- Being-H0: Vision-Language-Action Pretraining from Large-Scale Human Videos☆186Updated 3 months ago
- OpenHelix: An Open-source Dual-System VLA Model for Robotic Manipulation☆330Updated 3 months ago
- Fast-in-Slow: A Dual-System Foundation Model Unifying Fast Manipulation within Slow Reasoning☆129Updated 4 months ago
- [ICLR 2025 Oral] Seer: Predictive Inverse Dynamics Models are Scalable Learners for Robotic Manipulation☆264Updated 5 months ago
- Official PyTorch Implementation of Unified Video Action Model (RSS 2025)☆303Updated 4 months ago
- The official repo for "SpatialBot: Precise Spatial Understanding with Vision Language Models"☆319Updated 2 months ago
- The repo of paper `RoboMamba: Multimodal State Space Model for Efficient Robot Reasoning and Manipulation`☆143Updated 11 months ago
- ☆414Updated 2 weeks ago
- VLAC: A Vision-Language-Action-Critic Model for Robotic Real-World Reinforcement Learning☆244Updated 2 months ago
- A comprehensive list of papers about dual-system VLA models, including papers, codes, and related websites.☆86Updated 3 weeks ago
- An All-in-one robot manipulation learning suite for policy models training and evaluation on various datasets and benchmarks.☆162Updated last month
- Official code for EWMBench: Evaluating Scene, Motion, and Semantic Quality in Embodied World Models☆94Updated 5 months ago
- The official implementation of "Soft-Prompted Transformer as Scalable Cross-Embodiment Vision-Language-Action Model"☆316Updated last week