DelinQu / awesome-vision-language-action-model
Latest Advances on Vision-Language-Action Models.
★88 · Updated 5 months ago
Alternatives and similar repositories for awesome-vision-language-action-model
Users interested in awesome-vision-language-action-model are comparing it to the repositories listed below.
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos · ★341 · Updated 6 months ago
- 🔥 SpatialVLA: a spatially enhanced vision-language-action model trained on 1.1 million real robot episodes. Accepted at RSS 2025. · ★415 · Updated last month
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model · ★266 · Updated last month
- ★378 · Updated 6 months ago
- A Foundational Vision-Language-Action Model for Synergizing Cognition and Action in Robotic Manipulation · ★311 · Updated 2 months ago
- Embodied Chain of Thought: A robotic policy that reasons to solve the task. · ★284 · Updated 4 months ago
- ★273 · Updated 3 months ago
- OpenHelix: An Open-source Dual-System VLA Model for Robotic Manipulation · ★230 · Updated 2 months ago
- WorldVLA: Towards Autoregressive Action World Model · ★310 · Updated last month
- Embodied Reasoning Question Answer (ERQA) Benchmark · ★191 · Updated 4 months ago
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization · ★135 · Updated 4 months ago
- [CVPR 2025] The official implementation of "Universal Actions for Enhanced Embodied Foundation Models" · ★191 · Updated 4 months ago
- Official code of the paper "DeeR-VLA: Dynamic Inference of Multimodal Large Language Models for Efficient Robot Execution" · ★100 · Updated 5 months ago
- Official repo of VLABench, a large-scale benchmark designed for fairly evaluating VLAs, embodied agents, and VLMs. · ★269 · Updated last month
- Official implementation of "OneTwoVLA: A Unified Vision-Language-Action Model with Adaptive Reasoning" · ★163 · Updated 2 months ago
- OpenVLA: An open-source vision-language-action model for robotic manipulation. · ★228 · Updated 4 months ago
- Official code for VLA-OS. · ★78 · Updated last month
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation" · ★270 · Updated last year
- Online RL with Simple Reward Enables Training VLA Models with Only One Trajectory · ★324 · Updated last month
- This repository compiles a list of papers on the application of video technology in robotics. Star⭐ the repo and fol… · ★167 · Updated 6 months ago
- ★64 · Updated 5 months ago
- The official repo for "SpatialBot: Precise Spatial Understanding with Vision Language Models." · ★290 · Updated 2 months ago
- LLaVA-VLA: A Simple Yet Powerful Vision-Language-Action Model [Actively Maintained 🔥] · ★120 · Updated this week
- Unified Vision-Language-Action Model · ★170 · Updated 2 weeks ago
- The repo of the paper "RoboMamba: Multimodal State Space Model for Efficient Robot Reasoning and Manipulation" · ★129 · Updated 7 months ago
- Official PyTorch implementation of Unified Video Action Model (RSS 2025) · ★253 · Updated 2 weeks ago
- Single-file implementation to advance vision-language-action (VLA) models with reinforcement learning. · ★190 · Updated 2 weeks ago
- SoFar: Language-Grounded Orientation Bridges Spatial Reasoning and Object Manipulation · ★185 · Updated last month
- [RSS 2025] Learning to Act Anywhere with Task-centric Latent Actions · ★628 · Updated this week
- [ICLR'25] LLaRA: Supercharging Robot Learning Data for Vision-Language Policy · ★220 · Updated 4 months ago