moojink / openvla-oft
Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success
☆1,019 · Updated Sep 9, 2025
Alternatives and similar repositories for openvla-oft
Users interested in openvla-oft are comparing it to the repositories listed below.
- OpenVLA: An open-source vision-language-action model for robotic manipulation. ☆5,251 · Updated Mar 23, 2025
- Benchmarking Knowledge Transfer in Lifelong Robot Learning ☆1,459 · Updated Mar 15, 2025
- Evaluating and reproducing real-world robot manipulation policies (e.g., RT-1, RT-1-X, Octo) in simulation under common setups (e.g., Goo… ☆968 · Updated Dec 20, 2025
- [RSS 2025] Learning to Act Anywhere with Task-centric Latent Actions ☆984 · Updated Nov 19, 2025
- ☆432 · Updated Nov 29, 2025
- ☆10,160 · Updated Dec 27, 2025
- RDT-1B: a Diffusion Foundation Model for Bimanual Manipulation ☆1,614 · Updated Jan 21, 2026
- [ICLR 2026] SimpleVLA-RL: Scaling VLA Training via Reinforcement Learning ☆1,380 · Updated Jan 6, 2026
- A Foundational Vision-Language-Action Model for Synergizing Cognition and Action in Robotic Manipulation ☆405 · Updated Oct 30, 2025
- Octo is a transformer-based robot policy trained on a diverse mix of 800k robot trajectories. ☆1,536 · Updated Jul 31, 2024
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos ☆460 · Updated Jan 22, 2025
- Official repo of VLABench, a large-scale benchmark designed for fair evaluation of VLAs, embodied agents, and VLMs. ☆381 · Updated Nov 11, 2025
- OpenVLA: An open-source vision-language-action model for robotic manipulation. ☆334 · Updated Mar 19, 2025
- Embodied Chain of Thought: A robotic policy that reasons to solve the task. ☆364 · Updated Apr 5, 2025
- ☆38 · Updated Apr 15, 2025
- Re-implementation of pi0 vision-language-action (VLA) model from Physical Intelligence ☆1,384 · Updated Jan 31, 2025
- 🔥 SpatialVLA: a spatially enhanced vision-language-action model trained on 1.1 million real robot episodes. Accepted at RSS 2025. ☆645 · Updated Jun 23, 2025
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model ☆336 · Updated Oct 3, 2025
- [ICLR 2025 Oral] Seer: Predictive Inverse Dynamics Models are Scalable Learners for Robotic Manipulation ☆277 · Updated Jul 8, 2025
- RoboTwin 2.0 Official Repo ☆1,934 · Updated this week
- [RSS 2023] Diffusion Policy: Visuomotor Policy Learning via Action Diffusion ☆3,755 · Updated Dec 24, 2024
- 🎁 A collection of utilities for LeRobot. ☆854 · Updated this week
- Single-file implementation to advance vision-language-action (VLA) models with reinforcement learning. ☆399 · Updated Nov 8, 2025
- ICCV 2025 ☆153 · Updated Dec 10, 2025
- RoboVerse: Towards a Unified Platform, Dataset and Benchmark for Scalable and Generalizable Robot Learning ☆1,663 · Updated this week
- SAPIEN Manipulation Skill Framework, an open-source GPU-parallelized robotics simulator and benchmark, led by Hillbot, Inc. ☆2,542 · Updated Jan 31, 2026
- Code for FLIP: Flow-Centric Generative Planning for General-Purpose Manipulation Tasks ☆79 · Updated Dec 12, 2024
- [IROS 2025 Best Paper Award Finalist & IEEE TRO 2026] The Large-scale Manipulation Platform for Scalable and Intelligent Embodied Systems ☆2,775 · Updated Dec 16, 2025
- CALVIN - A benchmark for Language-Conditioned Policy Learning for Long-Horizon Robot Manipulation Tasks ☆829 · Updated Sep 8, 2025
- [ICML 2025] OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction ☆115 · Updated Apr 14, 2025
- [RSS 2024] 3D Diffusion Policy: Generalizable Visuomotor Policy Learning via Simple 3D Representations ☆1,243 · Updated Oct 17, 2025
- ☆1,155 · Updated Oct 27, 2025
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation" ☆301 · Updated Apr 22, 2024
- Heterogeneous Pre-trained Transformer (HPT) as a Scalable Policy Learner. ☆524 · Updated Dec 6, 2024
- [AAAI'26 Oral] DexGraspVLA: A Vision-Language-Action Framework Towards General Dexterous Grasping ☆467 · Updated Aug 10, 2025
- Code for Point Policy: Unifying Observations and Actions with Key Points for Robot Manipulation ☆90 · Updated Jul 21, 2025
- [ICML 2024] 3D-VLA: A 3D Vision-Language-Action Generative World Model ☆618 · Updated Oct 29, 2024
- Official PyTorch Implementation of Unified Video Action Model (RSS 2025) ☆331 · Updated Jul 23, 2025
- ☆1,733 · Updated Jul 23, 2024