Psi-Robot / Awesome-VLA-Papers
Paper list for the survey: A Survey on Vision-Language-Action Models: An Action Tokenization Perspective
☆408 · Updated 6 months ago
Alternatives and similar repositories for Awesome-VLA-Papers
Users interested in Awesome-VLA-Papers are comparing it to the repositories listed below.
- StarVLA: A Lego-like Codebase for Vision-Language-Action Model Development ☆806 · Updated this week
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model ☆334 · Updated 3 months ago
- 🔥 SpatialVLA: a spatially enhanced vision-language-action model trained on 1.1 million real-robot episodes. Accepted at RSS 2025. ☆625 · Updated 6 months ago
- ☆421 · Updated last month
- Dexbotic: Open-Source Vision-Language-Action Toolbox ☆655 · Updated this week
- A curated list of large VLM-based VLA models for robotic manipulation. ☆313 · Updated 3 weeks ago
- ☆434 · Updated 3 weeks ago
- Galaxea's first VLA release ☆488 · Updated last week
- The official implementation of "Soft-Prompted Transformer as Scalable Cross-Embodiment Vision-Language-Action Model" ☆418 · Updated last week
- OpenHelix: An Open-source Dual-System VLA Model for Robotic Manipulation ☆337 · Updated 4 months ago
- Evo-1: Lightweight Vision-Language-Action Model with Preserved Semantic Alignment ☆199 · Updated last month
- Single-file implementation to advance vision-language-action (VLA) models with reinforcement learning. ☆382 · Updated 2 months ago
- Building General-Purpose Robots Based on Embodied Foundation Model ☆706 · Updated last month
- OpenVLA: An open-source vision-language-action model for robotic manipulation. ☆324 · Updated 9 months ago
- [RSS 2025] Learning to Act Anywhere with Task-centric Latent Actions ☆944 · Updated last month
- [Actively Maintained🔥] A list of Embodied AI papers accepted by top conferences (ICLR, NeurIPS, ICML, RSS, CoRL, ICRA, IROS, CVPR, ICCV,… ☆452 · Updated last month
- A Foundational Vision-Language-Action Model for Synergizing Cognition and Action in Robotic Manipulation ☆397 · Updated 2 months ago
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos ☆438 · Updated 11 months ago
- Spirit-v1.5: A Robotic Foundation Model by Spirit AI ☆280 · Updated this week
- Official code of RDT 2 ☆624 · Updated last month
- InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy ☆335 · Updated last week
- ☆28 · Updated 10 months ago
- Latest Advances on Vision-Language-Action Models. ☆124 · Updated 10 months ago
- [CVPR 2025] The official implementation of "Universal Actions for Enhanced Embodied Foundation Models" ☆224 · Updated 2 months ago
- Running VLA at a 30 Hz frame rate and 480 Hz trajectory frequency ☆352 · Updated 2 weeks ago
- RynnVLA-002: A Unified Vision-Language-Action and World Model ☆852 · Updated last month
- RynnVLA-001: Using Human Demonstrations to Improve Robot Manipulation ☆275 · Updated last month
- LLaVA-VLA: A Simple Yet Powerful Vision-Language-Action Model [Actively Maintained🔥] ☆173 · Updated 2 months ago
- Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success ☆963 · Updated 4 months ago
- Official repo of VLABench, a large-scale benchmark designed for fairly evaluating VLAs, Embodied Agents, and VLMs. ☆365 · Updated 2 months ago