RynnVLA-002: A Unified Vision-Language-Action and World Model
☆955 · Dec 2, 2025 · Updated 3 months ago
Alternatives and similar repositories for RynnVLA-002
Users interested in RynnVLA-002 are comparing it to the repositories listed below.
- Official PyTorch Implementation of Unified Video Action Model (RSS 2025) ☆356 · Jul 23, 2025 · Updated 8 months ago
- Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success ☆1,105 · Sep 9, 2025 · Updated 6 months ago
- Single-file implementation to advance vision-language-action (VLA) models with reinforcement learning ☆414 · Nov 8, 2025 · Updated 4 months ago
- [RSS 2025] Learning to Act Anywhere with Task-centric Latent Actions ☆1,028 · Nov 19, 2025 · Updated 4 months ago
- [AAAI'26 Oral] DexGraspVLA: A Vision-Language-Action Framework Towards General Dexterous Grasping ☆491 · Aug 10, 2025 · Updated 7 months ago
- [ICLR 2026] SimpleVLA-RL: Scaling VLA Training via Reinforcement Learning ☆1,524 · Jan 6, 2026 · Updated 2 months ago
- [NeurIPS 2025] DreamVLA: A Vision-Language-Action Model Dreamed with Comprehensive World Knowledge ☆309 · Jan 6, 2026 · Updated 2 months ago
- OpenVLA: An open-source vision-language-action model for robotic manipulation ☆5,715 · Mar 23, 2025 · Updated last year
- Unified World Models: Coupling Video and Action Diffusion for Pretraining on Large Robotic Datasets ☆201 · Oct 8, 2025 · Updated 5 months ago
- [ICRA 2026] RynnVLA-001: Using Human Demonstrations to Improve Robot Manipulation ☆286 · Jan 23, 2026 · Updated 2 months ago
- 🔥 SpatialVLA: a spatially enhanced vision-language-action model trained on 1.1 million real robot episodes. Accepted at RSS 2025 ☆675 · Jun 23, 2025 · Updated 9 months ago
- Benchmarking Knowledge Transfer in Lifelong Robot Learning ☆1,644 · Mar 15, 2025 · Updated last year
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos ☆497 · Jan 22, 2025 · Updated last year
- Official implementation of the paper "ConRFT: A Reinforced Fine-tuning Method for VLA Models via Consistency Policy" ☆338 · Updated this week
- Official repo of VLABench, a large-scale benchmark designed for fairly evaluating VLAs, embodied agents, and VLMs ☆409 · Nov 11, 2025 · Updated 4 months ago
- [ICML 2024] 3D-VLA: A 3D Vision-Language-Action Generative World Model ☆621 · Oct 29, 2024 · Updated last year
- Causal video-action world model for generalist robot control ☆892 · Feb 27, 2026 · Updated last month
- Building General-Purpose Robots Based on Embodied Foundation Model ☆799 · Feb 11, 2026 · Updated last month
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model ☆343 · Oct 3, 2025 · Updated 5 months ago
- Spirit-v1.5: A Robotic Foundation Model by Spirit AI ☆540 · Jan 14, 2026 · Updated 2 months ago
- RDT-1B: a Diffusion Foundation Model for Bimanual Manipulation ☆1,657 · Jan 21, 2026 · Updated 2 months ago
- ☆10,884 · Mar 20, 2026 · Updated last week
- Code to pretrain, fine-tune, and evaluate DreamZero and run sim & real-world evals ☆1,413 · Mar 18, 2026 · Updated last week
- Video Prediction Policy: A Generalist Robot Policy with Predictive Visual Representations (https://video-prediction-policy.github.io) ☆369 · May 17, 2025 · Updated 10 months ago
- ☆458 · Nov 29, 2025 · Updated 4 months ago
- A curated list of state-of-the-art research in embodied AI, focusing on vision-language-action (VLA) models, vision-language navigation (… ☆2,817 · Mar 20, 2026 · Updated last week
- [IROS 2025 Best Paper Award Finalist & IEEE TRO 2026] The Large-scale Manipulation Platform for Scalable and Intelligent Embodied Systems ☆2,830 · Dec 16, 2025 · Updated 3 months ago
- ☆250 · Aug 25, 2025 · Updated 7 months ago
- ☆427 · Updated this week
- ReKep: Spatio-Temporal Reasoning of Relational Keypoint Constraints for Robotic Manipulation ☆924 · Feb 20, 2025 · Updated last year
- [ICLR 2026] Unified Vision-Language-Action Model ☆290 · Oct 15, 2025 · Updated 5 months ago
- Evaluating and reproducing real-world robot manipulation policies (e.g., RT-1, RT-1-X, Octo) in simulation under common setups (e.g., Goo… ☆1,015 · Dec 20, 2025 · Updated 3 months ago
- ICLR 2026 Paper: Ctrl-World ☆379 · Feb 28, 2026 · Updated last month
- Official code of RDT 2 ☆748 · Feb 7, 2026 · Updated last month
- ☆941 · Mar 18, 2026 · Updated last week
- Galaxea's open-source VLA repository ☆556 · Feb 14, 2026 · Updated last month
- Being-H0.5: Scaling Human-Centric Robot Learning for Cross-Embodiment Generalization ☆366 · Jan 27, 2026 · Updated 2 months ago
- [ICCV 2025 Oral] Latent Motion Token as the Bridging Language for Learning Robot Manipulation from Videos ☆169 · Oct 1, 2025 · Updated 5 months ago
- DreamGen: Nvidia GEAR Lab's initiative to solve the robotics data problem using world models ☆514 · Oct 24, 2025 · Updated 5 months ago