DelinQu / awesome-vision-language-action-model
Latest Advances on Vision-Language-Action Models.
☆116 · Updated 7 months ago
Alternatives and similar repositories for awesome-vision-language-action-model
Users interested in awesome-vision-language-action-model are comparing it to the libraries listed below.
- Embodied Chain of Thought: A robotic policy that reasons to solve the task. ☆312 · Updated 6 months ago
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model ☆304 · Updated 3 weeks ago
- OpenVLA: An open-source vision-language-action model for robotic manipulation. ☆274 · Updated 7 months ago
- Official code of the paper "DeeR-VLA: Dynamic Inference of Multimodal Large Language Models for Efficient Robot Execution" ☆114 · Updated 8 months ago
- Single-file implementation to advance vision-language-action (VLA) models with reinforcement learning. ☆313 · Updated last month
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos ☆387 · Updated 9 months ago
- OpenHelix: An Open-source Dual-System VLA Model for Robotic Manipulation ☆307 · Updated 2 months ago
- ☆328 · Updated this week
- A curated list of large VLM-based VLA models for robotic manipulation. ☆224 · Updated 3 weeks ago
- InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy ☆191 · Updated last week
- WorldVLA: Towards Autoregressive Action World Model ☆472 · Updated 2 weeks ago
- Paper list for the survey "A Survey on Vision-Language-Action Models: An Action Tokenization Perspective" ☆296 · Updated 3 months ago
- Official implementation of "OneTwoVLA: A Unified Vision-Language-Action Model with Adaptive Reasoning" ☆191 · Updated 4 months ago
- A Foundational Vision-Language-Action Model for Synergizing Cognition and Action in Robotic Manipulation ☆362 · Updated 5 months ago
- 🔥 SpatialVLA: a spatially enhanced vision-language-action model trained on 1.1 million real robot episodes. Accepted at RSS 2025. ☆553 · Updated 4 months ago
- ☆403 · Updated 9 months ago
- Official repo of VLABench, a large-scale benchmark designed for fairly evaluating VLAs, embodied agents, and VLMs. ☆310 · Updated 2 months ago
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization ☆144 · Updated 6 months ago
- VLAC: A Vision-Language-Action-Critic Model for Robotic Real-World Reinforcement Learning ☆196 · Updated last month
- [ICML 2025] OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction ☆107 · Updated 6 months ago
- [CVPR 2025] The official implementation of "Universal Actions for Enhanced Embodied Foundation Models" ☆207 · Updated 7 months ago
- Repo for the paper "RoboMamba: Multimodal State Space Model for Efficient Robot Reasoning and Manipulation" ☆137 · Updated 10 months ago
- ICCV 2025 ☆139 · Updated 2 months ago
- LLaVA-VLA: A Simple Yet Powerful Vision-Language-Action Model [Actively Maintained 🔥] ☆162 · Updated last month
- [NeurIPS 2025] DreamVLA: A Vision-Language-Action Model Dreamed with Comprehensive World Knowledge ☆203 · Updated last month
- Official code for VLA-OS. ☆116 · Updated 4 months ago
- starVLA: A Lego-like Codebase for Vision-Language-Action Model Development ☆203 · Updated last week
- This repository compiles a list of papers on applying video technology in robotics! Star ⭐ the repo and fol… ☆167 · Updated 8 months ago
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation" ☆285 · Updated last year
- Galaxea's first VLA release ☆288 · Updated this week