Psi-Robot / Awesome-VLA-Papers
Paper list for the survey "A Survey on Vision-Language-Action Models: An Action Tokenization Perspective"
⭐296 · Updated 3 months ago
Alternatives and similar repositories for Awesome-VLA-Papers
Users interested in Awesome-VLA-Papers are comparing it to the repositories listed below.
- 🔥 SpatialVLA: a spatial-enhanced vision-language-action model trained on 1.1 million real robot episodes. Accepted at RSS 2025. ⭐541 · Updated 4 months ago
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model ⭐297 · Updated 3 weeks ago
- A curated list of large VLM-based VLA models for robotic manipulation. ⭐224 · Updated 3 weeks ago
- WorldVLA: Towards Autoregressive Action World Model ⭐472 · Updated 2 weeks ago
- ⭐403 · Updated 9 months ago
- [CVPR 2025] The official implementation of "Universal Actions for Enhanced Embodied Foundation Models" ⭐204 · Updated 7 months ago
- Single-file implementation to advance vision-language-action (VLA) models with reinforcement learning. ⭐313 · Updated last month
- OpenHelix: An Open-source Dual-System VLA Model for Robotic Manipulation ⭐307 · Updated 2 months ago
- ⭐291 · Updated 2 weeks ago
- A Foundational Vision-Language-Action Model for Synergizing Cognition and Action in Robotic Manipulation ⭐362 · Updated 5 months ago
- Video Prediction Policy: A Generalist Robot Policy with Predictive Visual Representations https://video-prediction-policy.github.io ⭐283 · Updated 5 months ago
- GraspVLA: a Grasping Foundation Model Pre-trained on Billion-scale Synthetic Action Data ⭐258 · Updated 3 months ago
- OpenVLA: An open-source vision-language-action model for robotic manipulation. ⭐271 · Updated 7 months ago
- The official repo for "SpatialBot: Precise Spatial Understanding with Vision Language Models." ⭐312 · Updated last month
- Building General-Purpose Robots Based on Embodied Foundation Model ⭐554 · Updated this week
- Official code of RDT 2 ⭐544 · Updated 2 weeks ago
- [Actively Maintained🔥] A list of Embodied AI papers accepted by top conferences (ICLR, NeurIPS, ICML, RSS, CoRL, ICRA, IROS, CVPR, ICCV, …) ⭐390 · Updated 3 weeks ago
- Dexbotic: Open-Source Vision-Language-Action Toolbox ⭐210 · Updated last week
- Galaxea's first VLA release ⭐288 · Updated this week
- [NeurIPS 2025 Spotlight] SoFar: Language-Grounded Orientation Bridges Spatial Reasoning and Object Manipulation ⭐197 · Updated 3 months ago
- RynnVLA-001: Using Human Demonstrations to Improve Robot Manipulation ⭐240 · Updated last month
- [RSS 2025] Learning to Act Anywhere with Task-centric Latent Actions ⭐790 · Updated 2 months ago
- InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy ⭐191 · Updated last week
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos ⭐387 · Updated 9 months ago
- ⭐325 · Updated 6 months ago
- ⭐234 · Updated 7 months ago
- [NeurIPS 2025] DreamVLA: A Vision-Language-Action Model Dreamed with Comprehensive World Knowledge ⭐203 · Updated last month
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation" ⭐285 · Updated last year
- Official repo of VLABench, a large-scale benchmark designed for fairly evaluating VLAs, Embodied Agents, and VLMs. ⭐310 · Updated 2 months ago
- Official PyTorch Implementation of Unified Video Action Model (RSS 2025) ⭐278 · Updated 3 months ago