OpenVLA: An open-source vision-language-action model for robotic manipulation.
☆347 · Updated Mar 19, 2025
Alternatives and similar repositories for openvla-mini
Users interested in openvla-mini are comparing it to the repositories listed below.
- ☆443 · Updated Nov 29, 2025
- Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success · ☆1,057 · Updated Sep 9, 2025
- Evaluating and reproducing real-world robot manipulation policies (e.g., RT-1, RT-1-X, Octo) in simulation under common setups (e.g., Goo… · ☆991 · Updated Dec 20, 2025
- Code for FLIP: Flow-Centric Generative Planning for General-Purpose Manipulation Tasks · ☆79 · Updated Dec 12, 2024
- OpenVLA: An open-source vision-language-action model for robotic manipulation. · ☆5,383 · Updated Mar 23, 2025
- Re-implementation of pi0 vision-language-action (VLA) model from Physical Intelligence · ☆1,404 · Updated Jan 31, 2025
- ☆44 · Updated Mar 11, 2025
- A Foundational Vision-Language-Action Model for Synergizing Cognition and Action in Robotic Manipulation · ☆406 · Updated Oct 30, 2025
- Official repo of VLABench, a large-scale benchmark designed for fairly evaluating VLAs, Embodied Agents, and VLMs. · ☆390 · Updated Nov 11, 2025
- Benchmarking Knowledge Transfer in Lifelong Robot Learning · ☆1,544 · Updated Mar 15, 2025
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos · ☆472 · Updated Jan 22, 2025
- Octo is a transformer-based robot policy trained on a diverse mix of 800k robot trajectories. · ☆1,560 · Updated Jul 31, 2024
- Official code for "Behavior Generation with Latent Actions" (ICML 2024 Spotlight) · ☆198 · Updated Feb 28, 2024
- [CVPR 2025] The official implementation of "Universal Actions for Enhanced Embodied Foundation Models" · ☆234 · Updated Nov 6, 2025
- Official PyTorch Implementation of Unified Video Action Model (RSS 2025) · ☆342 · Updated Jul 23, 2025
- RDT-1B: a Diffusion Foundation Model for Bimanual Manipulation · ☆1,625 · Updated Jan 21, 2026
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model · ☆339 · Updated Oct 3, 2025
- Embodied Chain of Thought: a robotic policy that reasons in order to solve the task. · ☆370 · Updated Apr 5, 2025
- [ICCV 2025 Oral] Latent Motion Token as the Bridging Language for Learning Robot Manipulation from Videos · ☆164 · Updated Oct 1, 2025
- ☆10,475 · Updated Dec 27, 2025
- Code for the paper "3D Diffuser Actor: Policy Diffusion with 3D Scene Representations" · ☆384 · Updated Aug 17, 2024
- RoboVerse: Towards a Unified Platform, Dataset and Benchmark for Scalable and Generalizable Robot Learning · ☆1,672 · Updated Feb 23, 2026
- robomimic: A Modular Framework for Robot Learning from Demonstration · ☆1,309 · Updated Feb 5, 2026
- Heterogeneous Pre-trained Transformer (HPT) as Scalable Policy Learner. · ☆531 · Updated Dec 6, 2024
- AutoEval: Autonomous Evaluation of Generalist Robot Manipulation Policies in the Real World | CoRL 2025 · ☆93 · Updated Jan 30, 2026
- DROID Policy Learning and Evaluation · ☆270 · Updated Apr 22, 2025
- Evaluating and reproducing real-world robot manipulation policies (e.g., RT-1, RT-1-X, Octo, and OpenVLA) in simulation under common setu… · ☆264 · Updated Jun 23, 2025
- Official PyTorch implementation of AdaFlow · ☆63 · Updated Nov 8, 2024
- [RSS 2024] 3D Diffusion Policy: Generalizable Visuomotor Policy Learning via Simple 3D Representations · ☆1,274 · Updated Oct 17, 2025
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation" · ☆301 · Updated Apr 22, 2024
- Official implementation for Mobi-π. · ☆110 · Updated Jun 5, 2025
- [ECCV 2024] ManiGaussian: Dynamic Gaussian Splatting for Multi-task Robotic Manipulation · ☆268 · Updated Mar 24, 2025
- [ICML 2025] OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction · ☆115 · Updated Apr 14, 2025
- SAPIEN Manipulation Skill Framework, an open-source GPU-parallelized robotics simulator and benchmark, led by Hillbot, Inc. · ☆2,629 · Updated this week
- 🔥 SpatialVLA: a spatial-enhanced vision-language-action model trained on 1.1 million real robot episodes. Accepted at RSS 2025. · ☆665 · Updated Jun 23, 2025
- ☆278 · Updated Aug 26, 2024
- Code for Point Policy: Unifying Observations and Actions with Key Points for Robot Manipulation · ☆90 · Updated Jul 21, 2025
- [CoRL 2025] Pretraining code for FLOWER VLA on OXE · ☆32 · Updated Sep 22, 2025
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization · ☆159 · Updated Apr 6, 2025