AoqunJin / Awesome-VLA-Post-Training
A collection of vision-language-action model post-training methods.
☆94 Updated this week
Alternatives and similar repositories for Awesome-VLA-Post-Training
Users interested in Awesome-VLA-Post-Training are comparing it to the repositories listed below
- ☆129 Updated this week
- This repository summarizes recent advances in the VLA + RL paradigm and provides a taxonomic classification of relevant works. ☆239 Updated 3 weeks ago
- [ICLR 2025 Oral] Seer: Predictive Inverse Dynamics Models are Scalable Learners for Robotic Manipulation ☆218 Updated last month
- Single-file implementation to advance vision-language-action (VLA) models with reinforcement learning. ☆240 Updated last month
- This is the official implementation of the paper "ConRFT: A Reinforced Fine-tuning Method for VLA Models via Consistency Policy". ☆256 Updated 2 months ago
- OpenHelix: An Open-source Dual-System VLA Model for Robotic Manipulation ☆254 Updated this week
- RoboDual: Dual-System for Robotic Manipulation ☆89 Updated last month
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization ☆138 Updated 4 months ago
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model ☆277 Updated 2 months ago
- Official repo of VLABench, a large-scale benchmark designed for fairly evaluating VLA, Embodied Agent, and VLMs. ☆281 Updated 3 weeks ago
- [CVPR 2025] The official implementation of "Universal Actions for Enhanced Embodied Foundation Models" ☆197 Updated 5 months ago
- Code for Reinforcement Learning from Vision Language Foundation Model Feedback ☆118 Updated last year
- [RSS 2024] Code for "Multimodal Diffusion Transformer: Learning Versatile Behavior from Multimodal Goals" for CALVIN experiments with pre… ☆151 Updated 10 months ago
- Official implementation of "OneTwoVLA: A Unified Vision-Language-Action Model with Adaptive Reasoning" ☆178 Updated 3 months ago
- ☆388 Updated 7 months ago
- Official code for VLA-OS. ☆101 Updated 2 months ago
- Embodied Chain of Thought: A robotic policy that reasons to solve the task. ☆301 Updated 4 months ago
- ☆198 Updated 5 months ago
- OpenVLA: An open-source vision-language-action model for robotic manipulation. ☆250 Updated 5 months ago
- SoFar: Language-Grounded Orientation Bridges Spatial Reasoning and Object Manipulation ☆188 Updated 2 months ago
- Online RL with Simple Reward Enables Training VLA Models with Only One Trajectory ☆374 Updated 2 months ago
- Official codebase for "Any-point Trajectory Modeling for Policy Learning" ☆243 Updated 2 months ago
- Official implementation of GR-MG ☆85 Updated 7 months ago
- ✨✨ Official implementation of BridgeVLA ☆120 Updated 2 months ago
- PyTorch PI-zero and PI-zero-fast, adapted from LeRobot. ☆95 Updated last month
- ☆296 Updated 4 months ago
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation" ☆274 Updated last year
- ICCV2025 ☆114 Updated this week
- The official codebase for ManipLLM: Embodied Multimodal Large Language Model for Object-Centric Robotic Manipulation (CVPR 2024) ☆138 Updated last year
- [NeurIPS 2024] CLOVER: Closed-Loop Visuomotor Control with Generative Expectation for Robotic Manipulation ☆123 Updated last month