Psi-Robot / Awesome-VLA-Papers
Paper list in the survey: A Survey on Vision-Language-Action Models: An Action Tokenization Perspective
★233 · Updated 2 months ago
Alternatives and similar repositories for Awesome-VLA-Papers
Users that are interested in Awesome-VLA-Papers are comparing it to the libraries listed below
- 🔥 SpatialVLA: a spatial-enhanced vision-language-action model trained on 1.1 million real robot episodes. Accepted at RSS 2025. (★482, updated 2 months ago)
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model (★283, updated 3 months ago)
- WorldVLA: Towards Autoregressive Action World Model (★384, updated 2 weeks ago)
- A curated list of large VLM-based VLA models for robotic manipulation. (★133, updated last week)
- (★394, updated 7 months ago)
- Single-file implementation to advance vision-language-action (VLA) models with reinforcement learning. (★274, updated this week)
- [Actively Maintained🔥] A list of Embodied AI papers accepted by top conferences (ICLR, NeurIPS, ICML, RSS, CoRL, ICRA, IROS, CVPR, ICCV, …) (★364, updated last month)
- (★252, updated last week)
- OpenHelix: An Open-source Dual-System VLA Model for Robotic Manipulation (★259, updated 2 weeks ago)
- [CVPR 2025] The official implementation of "Universal Actions for Enhanced Embodied Foundation Models" (★199, updated 5 months ago)
- (★302, updated 5 months ago)
- Video Prediction Policy: A Generalist Robot Policy with Predictive Visual Representations https://video-prediction-policy.github.io (★261, updated 3 months ago)
- The official repo for "SpatialBot: Precise Spatial Understanding with Vision Language Models." (★305, updated this week)
- [RSS 2025] Learning to Act Anywhere with Task-centric Latent Actions (★723, updated 3 weeks ago)
- Online RL with Simple Reward Enables Training VLA Models with Only One Trajectory (★389, updated 2 months ago)
- A Foundational Vision-Language-Action Model for Synergizing Cognition and Action in Robotic Manipulation (★341, updated 3 months ago)
- Latest Advances on Vision-Language-Action Models. (★106, updated 6 months ago)
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos (★369, updated 7 months ago)
- OpenVLA: An open-source vision-language-action model for robotic manipulation. (★254, updated 5 months ago)
- Official PyTorch Implementation of Unified Video Action Model (RSS 2025) (★261, updated last month)
- DexGraspVLA: A Vision-Language-Action Framework Towards General Dexterous Grasping (★366, updated last month)
- GraspVLA: a Grasping Foundation Model Pre-trained on Billion-scale Synthetic Action Data (★224, updated last month)
- [ICML 2024] 3D-VLA: A 3D Vision-Language-Action Generative World Model (★561, updated 10 months ago)
- SoFar: Language-Grounded Orientation Bridges Spatial Reasoning and Object Manipulation (★189, updated 2 months ago)
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation" (★277, updated last year)
- PyTorch PI-zero and PI-zero-fast, adapted from LeRobot (★119, updated 2 weeks ago)
- Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success (★661, updated last week)
- Galaxea's first VLA release (★215, updated this week)
- Official Code For VLA-OS. (★105, updated 2 months ago)
- ICCV2025 (★125, updated 3 weeks ago)