PKU-HMI-Lab / Hybrid-VLA
HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model
☆330 · Updated 2 months ago
Alternatives and similar repositories for Hybrid-VLA
Users interested in Hybrid-VLA are comparing it to the repositories listed below.
- Single-file implementation to advance vision-language-action (VLA) models with reinforcement learning. ☆365 · Updated last month
- InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy ☆312 · Updated last week
- [NeurIPS 2025] DreamVLA: A Vision-Language-Action Model Dreamed with Comprehensive World Knowledge ☆259 · Updated 3 months ago
- Official code for VLA-OS. ☆131 · Updated 6 months ago
- ICCV 2025 ☆145 · Updated 2 weeks ago
- The official implementation of "Soft-Prompted Transformer as Scalable Cross-Embodiment Vision-Language-Action Model" ☆378 · Updated 3 weeks ago
- Galaxea's first VLA release ☆323 · Updated 2 months ago
- StarVLA: A Lego-like Codebase for Vision-Language-Action Model Developing ☆595 · Updated this week
- [ICLR 2025 Oral] Seer: Predictive Inverse Dynamics Models are Scalable Learners for Robotic Manipulation ☆270 · Updated 5 months ago
- OpenHelix: An Open-source Dual-System VLA Model for Robotic Manipulation ☆331 · Updated 3 months ago
- [NeurIPS 2025 Spotlight] SoFar: Language-Grounded Orientation Bridges Spatial Reasoning and Object Manipulation ☆215 · Updated 5 months ago
- A curated list of large VLM-based VLA models for robotic manipulation. ☆288 · Updated this week
- Official implementation of "OneTwoVLA: A Unified Vision-Language-Action Model with Adaptive Reasoning" ☆205 · Updated 6 months ago
- 🔥 SpatialVLA: a spatially enhanced vision-language-action model trained on 1.1 million real robot episodes. Accepted at RSS 2025. ☆600 · Updated 6 months ago
- [CVPR 2025] The official implementation of "Universal Actions for Enhanced Embodied Foundation Models" ☆220 · Updated last month
- Official repo of VLABench, a large-scale benchmark designed for fair evaluation of VLA models, embodied agents, and VLMs. ☆344 · Updated last month
- Unified Vision-Language-Action Model ☆256 · Updated 2 months ago
- ☆18 · Updated 9 months ago
- ☆212 · Updated 3 months ago
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization ☆153 · Updated 8 months ago
- LLaVA-VLA: A Simple Yet Powerful Vision-Language-Action Model [Actively Maintained🔥] ☆173 · Updated last month
- ☆416 · Updated 3 weeks ago
- GraspVLA: a Grasping Foundation Model Pre-trained on Billion-scale Synthetic Action Data ☆300 · Updated 4 months ago
- A Foundational Vision-Language-Action Model for Synergizing Cognition and Action in Robotic Manipulation ☆388 · Updated last month
- Official implementation of the paper "ConRFT: A Reinforced Fine-tuning Method for VLA Models via Consistency Policy". ☆303 · Updated last month
- Summarizes recent advances in the VLA + RL paradigm and provides a taxonomic classification of relevant works. ☆373 · Updated 2 months ago
- F1: A Vision Language Action Model Bridging Understanding and Generation to Actions ☆145 · Updated 2 months ago
- Evo-1: Lightweight Vision-Language-Action Model with Preserved Semantic Alignment ☆186 · Updated last week
- Official PyTorch Implementation of Unified Video Action Model (RSS 2025) ☆309 · Updated 5 months ago
- Video Prediction Policy: A Generalist Robot Policy with Predictive Visual Representations (https://video-prediction-policy.github.io) ☆314 · Updated 7 months ago