PRIME-RL / SimpleVLA-RL
[ICLR 2026] SimpleVLA-RL: Scaling VLA Training via Reinforcement Learning
☆1,360 · Updated last month
Alternatives and similar repositories for SimpleVLA-RL
Users interested in SimpleVLA-RL are comparing it to the repositories listed below.
- Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success ☆1,019 · Updated 5 months ago
- StarVLA: A Lego-like Codebase for Vision-Language-Action Model Developing ☆1,043 · Updated last week
- Benchmarking Knowledge Transfer in Lifelong Robot Learning ☆1,459 · Updated 10 months ago
- [RSS 2025] Learning to Act Anywhere with Task-centric Latent Actions ☆981 · Updated 2 months ago
- ☆457 · Updated this week
- Building General-Purpose Robots Based on Embodied Foundation Model ☆759 · Updated this week
- Evaluating and reproducing real-world robot manipulation policies (e.g., RT-1, RT-1-X, Octo) in simulation under common setups (e.g., Goo… ☆968 · Updated last month
- Dexbotic: Open-Source Vision-Language-Action Toolbox ☆688 · Updated 2 weeks ago
- ☆430 · Updated 2 months ago
- Paper list for the survey: A Survey on Vision-Language-Action Models: An Action Tokenization Perspective ☆439 · Updated 7 months ago
- 🎁 A collection of utilities for LeRobot. ☆854 · Updated this week
- A paper list of my reading history: Robotics, Learning, Vision. ☆511 · Updated last month
- Single-file implementation to advance vision-language-action (VLA) models with reinforcement learning. ☆394 · Updated 3 months ago
- RDT-1B: a Diffusion Foundation Model for Bimanual Manipulation ☆1,610 · Updated 2 weeks ago
- [AAAI'26 Oral] DexGraspVLA: A Vision-Language-Action Framework Towards General Dexterous Grasping ☆467 · Updated 5 months ago
- It's not a list of papers, but a list of paper reading lists… ☆249 · Updated 9 months ago
- OpenHelix: An Open-source Dual-System VLA Model for Robotic Manipulation ☆342 · Updated 5 months ago
- [Actively Maintained🔥] A list of Embodied AI papers accepted by top conferences (ICLR, NeurIPS, ICML, RSS, CoRL, ICRA, IROS, CVPR, ICCV,… ☆472 · Updated 2 months ago
- Re-implementation of the pi0 vision-language-action (VLA) model from Physical Intelligence ☆1,384 · Updated last year
- 🔥 SpatialVLA: a spatial-enhanced vision-language-action model trained on 1.1 million real robot episodes. Accepted at RSS 2025. ☆645 · Updated 7 months ago
- Official code of RDT 2 ☆686 · Updated this week
- This repository summarizes recent advances in the VLA + RL paradigm and provides a taxonomic classification of relevant works. ☆383 · Updated 3 months ago
- RynnVLA-002: A Unified Vision-Language-Action and World Model ☆875 · Updated 2 months ago
- A comprehensive collection of resources on robot manipulation, including papers, code, and related websites. ☆847 · Updated last month
- ☆864 · Updated 4 months ago
- A Survey on Reinforcement Learning of Vision-Language-Action Models for Robotic Manipulation ☆484 · Updated 2 weeks ago
- [RSS 2024] 3D Diffusion Policy: Generalizable Visuomotor Policy Learning via Simple 3D Representations ☆1,243 · Updated 3 months ago
- Embodied Chain of Thought: A robotic policy that reasons to solve the task. ☆364 · Updated 10 months ago
- CALVIN - A benchmark for Language-Conditioned Policy Learning for Long-Horizon Robot Manipulation Tasks ☆824 · Updated 5 months ago
- ☆40 · Updated 10 months ago