JiuTian-VL / Large-VLM-based-VLA-for-Robotic-Manipulation
A curated list of large VLM-based VLA models for robotic manipulation.
☆238 · Updated last week
Alternatives and similar repositories for Large-VLM-based-VLA-for-Robotic-Manipulation
Users interested in Large-VLM-based-VLA-for-Robotic-Manipulation are comparing it to the repositories listed below.
- Single-file implementation to advance vision-language-action (VLA) models with reinforcement learning. ☆333 · Updated last week
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model ☆315 · Updated last month
- InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy ☆257 · Updated this week
- [NeurIPS 2025 Spotlight] SoFar: Language-Grounded Orientation Bridges Spatial Reasoning and Object Manipulation ☆201 · Updated 4 months ago
- 🔥 SpatialVLA: a spatially enhanced vision-language-action model trained on 1.1 million real robot episodes. Accepted at RSS 2025. ☆565 · Updated 4 months ago
- [Actively Maintained🔥] A list of Embodied AI papers accepted by top conferences (ICLR, NeurIPS, ICML, RSS, CoRL, ICRA, IROS, CVPR, ICCV, …) ☆405 · Updated 2 weeks ago
- GraspVLA: a Grasping Foundation Model Pre-trained on Billion-scale Synthetic Action Data ☆273 · Updated 3 months ago
- Official implementation of "OneTwoVLA: A Unified Vision-Language-Action Model with Adaptive Reasoning" ☆194 · Updated 5 months ago
- StarVLA: A Lego-like Codebase for Vision-Language-Action Model Developing ☆433 · Updated this week
- ☆409 · Updated 9 months ago
- This repository summarizes recent advances in the VLA + RL paradigm and provides a taxonomic classification of relevant works. ☆324 · Updated last month
- A Foundational Vision-Language-Action Model for Synergizing Cognition and Action in Robotic Manipulation ☆374 · Updated 2 weeks ago
- [ICLR 2025 Oral] Seer: Predictive Inverse Dynamics Models are Scalable Learners for Robotic Manipulation ☆255 · Updated 4 months ago
- Galaxea's first VLA release ☆312 · Updated 3 weeks ago
- Official code for VLA-OS. ☆121 · Updated 4 months ago
- Official PyTorch implementation of the Unified Video Action Model (RSS 2025) ☆289 · Updated 3 months ago
- OpenHelix: An Open-source Dual-System VLA Model for Robotic Manipulation ☆319 · Updated 2 months ago
- ICCV 2025 ☆141 · Updated 2 months ago
- Official repo of VLABench, a large-scale benchmark designed for fairly evaluating VLAs, embodied agents, and VLMs. ☆327 · Updated this week
- ☆344 · Updated 2 weeks ago
- ✨✨ [NeurIPS 2025] Official implementation of BridgeVLA ☆154 · Updated last month
- The official implementation of "Soft-Prompted Transformer as Scalable Cross-Embodiment Vision-Language-Action Model" ☆198 · Updated this week
- [CVPR 2025] The official implementation of "Universal Actions for Enhanced Embodied Foundation Models" ☆211 · Updated last week
- PyTorch Pi-Zero and Pi-Zero-Fast, adapted from LeRobot ☆145 · Updated 2 months ago
- [NeurIPS 2025] DreamVLA: A Vision-Language-Action Model Dreamed with Comprehensive World Knowledge ☆224 · Updated last month
- An all-in-one robot manipulation learning suite for training and evaluating policy models on various datasets and benchmarks. ☆157 · Updated last month
- LLaVA-VLA: A Simple Yet Powerful Vision-Language-Action Model [Actively Maintained🔥] ☆172 · Updated 2 weeks ago
- ☆309 · Updated last week
- Paper list for the survey "A Survey on Vision-Language-Action Models: An Action Tokenization Perspective" ☆320 · Updated 4 months ago
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos ☆400 · Updated 9 months ago