Psi-Robot / Awesome-VLA-Papers
Paper list for the survey "A Survey on Vision-Language-Action Models: An Action Tokenization Perspective".
⭐383 · Updated 5 months ago
Alternatives and similar repositories for Awesome-VLA-Papers
Users interested in Awesome-VLA-Papers are comparing it to the repositories listed below.
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model ⭐330 · Updated 2 months ago
- 🔥 SpatialVLA: a spatially enhanced vision-language-action model trained on 1.1 million real robot episodes. Accepted at RSS 2025. ⭐600 · Updated 6 months ago
- StarVLA: A Lego-like Codebase for Vision-Language-Action Model Development ⭐629 · Updated this week
- ⭐417 · Updated 3 weeks ago
- A curated list of large VLM-based VLA models for robotic manipulation. ⭐288 · Updated last week
- Dexbotic: Open-Source Vision-Language-Action Toolbox ⭐615 · Updated this week
- OpenHelix: An Open-source Dual-System VLA Model for Robotic Manipulation ⭐331 · Updated 4 months ago
- Single-file implementation to advance vision-language-action (VLA) models with reinforcement learning. ⭐370 · Updated last month
- [Actively Maintained🔥] A list of Embodied AI papers accepted by top conferences (ICLR, NeurIPS, ICML, RSS, CoRL, ICRA, IROS, CVPR, ICCV, …) ⭐441 · Updated 3 weeks ago
- A Foundational Vision-Language-Action Model for Synergizing Cognition and Action in Robotic Manipulation ⭐393 · Updated last month
- ⭐405 · Updated last week
- [RSS 2025] Learning to Act Anywhere with Task-centric Latent Actions ⭐897 · Updated last month
- Galaxea's first VLA release ⭐330 · Updated 2 months ago
- The official implementation of "Soft-Prompted Transformer as Scalable Cross-Embodiment Vision-Language-Action Model" ⭐378 · Updated 3 weeks ago
- Building General-Purpose Robots Based on Embodied Foundation Model ⭐638 · Updated 2 weeks ago
- [CVPR 2025] The official implementation of "Universal Actions for Enhanced Embodied Foundation Models" ⭐220 · Updated last month
- [AAAI'26 Oral] DexGraspVLA: A Vision-Language-Action Framework Towards General Dexterous Grasping ⭐451 · Updated 4 months ago
- Evo-1: Lightweight Vision-Language-Action Model with Preserved Semantic Alignment ⭐192 · Updated last week
- RynnVLA-002: A Unified Vision-Language-Action and World Model ⭐790 · Updated 3 weeks ago
- Official code of RDT 2 ⭐605 · Updated 3 weeks ago
- SimpleVLA-RL: Scaling VLA Training via Reinforcement Learning ⭐1,126 · Updated 2 months ago
- GraspVLA: a Grasping Foundation Model Pre-trained on Billion-scale Synthetic Action Data ⭐300 · Updated 5 months ago
- InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy ⭐319 · Updated last week
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos ⭐427 · Updated 11 months ago
- [NeurIPS 2025 Spotlight] SoFar: Language-Grounded Orientation Bridges Spatial Reasoning and Object Manipulation ⭐215 · Updated 5 months ago
- RynnVLA-001: Using Human Demonstrations to Improve Robot Manipulation ⭐271 · Updated 3 weeks ago
- ⭐342 · Updated this week
- Video Prediction Policy: A Generalist Robot Policy with Predictive Visual Representations (https://video-prediction-policy.github.io) ⭐314 · Updated 7 months ago
- [NeurIPS 2025] DreamVLA: A Vision-Language-Action Model Dreamed with Comprehensive World Knowledge ⭐259 · Updated 3 months ago
- The official repo for "SpatialBot: Precise Spatial Understanding with Vision Language Models" ⭐326 · Updated 3 months ago