[ICLR 2026] SimpleVLA-RL: Scaling VLA Training via Reinforcement Learning
☆1,508 · Jan 6, 2026 · Updated 2 months ago
Alternatives and similar repositories for SimpleVLA-RL
Users interested in SimpleVLA-RL are comparing it to the libraries listed below.
- Single-file implementation to advance vision-language-action (VLA) models with reinforcement learning. ☆409 · Nov 8, 2025 · Updated 4 months ago
- Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success ☆1,094 · Sep 9, 2025 · Updated 6 months ago
- ☆248 · Aug 25, 2025 · Updated 6 months ago
- [RSS 2025] Learning to Act Anywhere with Task-centric Latent Actions ☆1,023 · Nov 19, 2025 · Updated 4 months ago
- Benchmarking Knowledge Transfer in Lifelong Robot Learning ☆1,611 · Mar 15, 2025 · Updated last year
- Official implementation of the paper "ConRFT: A Reinforced Fine-tuning Method for VLA Models via Consistency Policy". ☆332 · Nov 11, 2025 · Updated 4 months ago
- ☆10,755 · Updated this week
- ☆1,203 · Oct 27, 2025 · Updated 4 months ago
- Summarizes recent advances in the VLA + RL paradigm and provides a taxonomic classification of relevant works. ☆400 · Oct 10, 2025 · Updated 5 months ago
- OpenVLA: An open-source vision-language-action model for robotic manipulation. ☆5,542 · Mar 23, 2025 · Updated 11 months ago
- RoboTwin 2.0 Official Repo ☆2,053 · Updated this week
- Official code of RDT 2 ☆740 · Feb 7, 2026 · Updated last month
- Evaluating and reproducing real-world robot manipulation policies (e.g., RT-1, RT-1-X, Octo) in simulation under common setups (e.g., Goo… ☆1,005 · Dec 20, 2025 · Updated 3 months ago
- RLinf: Reinforcement Learning Infrastructure for Embodied and Agentic AI ☆2,815 · Updated this week
- [RSS 2024] 3D Diffusion Policy: Generalizable Visuomotor Policy Learning via Simple 3D Representations ☆1,294 · Oct 17, 2025 · Updated 5 months ago
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation" ☆303 · Apr 22, 2024 · Updated last year
- RDT-1B: a Diffusion Foundation Model for Bimanual Manipulation ☆1,650 · Jan 21, 2026 · Updated 2 months ago
- Interactive Post-Training for Vision-Language-Action Models ☆163 · Jun 4, 2025 · Updated 9 months ago
- [RSS 2023] Diffusion Policy: Visuomotor Policy Learning via Action Diffusion ☆3,886 · Dec 24, 2024 · Updated last year
- RoboVerse: Towards a Unified Platform, Dataset and Benchmark for Scalable and Generalizable Robot Learning ☆1,683 · Mar 9, 2026 · Updated last week
- [IROS 2025 Best Paper Award Finalist & IEEE TRO 2026] The Large-scale Manipulation Platform for Scalable and Intelligent Embodied Systems ☆2,821 · Dec 16, 2025 · Updated 3 months ago
- Re-implementation of the pi0 vision-language-action (VLA) model from Physical Intelligence ☆1,424 · Jan 31, 2025 · Updated last year
- A Foundational Vision-Language-Action Model for Synergizing Cognition and Action in Robotic Manipulation ☆406 · Oct 30, 2025 · Updated 4 months ago
- Official repo of VLABench, a large-scale benchmark designed for fairly evaluating VLA models, embodied agents, and VLMs. ☆405 · Nov 11, 2025 · Updated 4 months ago
- Embodied Chain of Thought: A robotic policy that reasons to solve the task. ☆374 · Apr 5, 2025 · Updated 11 months ago
- A curated list of state-of-the-art research in embodied AI, focusing on vision-language-action (VLA) models, vision-language navigation (… ☆2,780 · Updated this week
- 🔥 SpatialVLA: a spatially enhanced vision-language-action model trained on 1.1 million real robot episodes. Accepted at RSS 2025. ☆673 · Jun 23, 2025 · Updated 8 months ago
- SAPIEN Manipulation Skill Framework, an open-source GPU-parallelized robotics simulator and benchmark, led by Hillbot, Inc. ☆2,679 · Mar 5, 2026 · Updated 2 weeks ago
- 🦾 A Dual-System VLA with System2 Thinking ☆136 · Aug 21, 2025 · Updated 7 months ago
- Official PyTorch Implementation of Unified Video Action Model (RSS 2025) ☆350 · Jul 23, 2025 · Updated 7 months ago
- ☆446 · Nov 29, 2025 · Updated 3 months ago
- Official implementation of "OneTwoVLA: A Unified Vision-Language-Action Model with Adaptive Reasoning" ☆217 · May 30, 2025 · Updated 9 months ago
- [ICLR 2025 Oral] Seer: Predictive Inverse Dynamics Models are Scalable Learners for Robotic Manipulation ☆283 · Jul 8, 2025 · Updated 8 months ago
- [CoRL 2025] GraspVLA: a Grasping Foundation Model Pre-trained on Billion-scale Synthetic Action Data ☆349 · Dec 29, 2025 · Updated 2 months ago
- RynnVLA-002: A Unified Vision-Language-Action and World Model ☆947 · Dec 2, 2025 · Updated 3 months ago
- DreamGen: NVIDIA GEAR Lab's initiative to solve the robotics data problem using world models ☆505 · Oct 24, 2025 · Updated 4 months ago
- StarVLA: A Lego-like Codebase for Vision-Language-Action Model Development ☆1,378 · Mar 13, 2026 · Updated last week
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos ☆485 · Jan 22, 2025 · Updated last year
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model ☆344 · Oct 3, 2025 · Updated 5 months ago