Jiaaqiliu / Awesome-VLA-Robotics
A comprehensive list of excellent research papers, models, datasets, and other resources on Vision-Language-Action (VLA) models in robotics.
☆504 · updated Jan 25, 2026
Alternatives and similar repositories for Awesome-VLA-Robotics
Users interested in Awesome-VLA-Robotics are comparing it to the repositories listed below.
- A curated list of state-of-the-art research in embodied AI, focusing on vision-language-action (VLA) models, vision-language navigation (… (☆2,550, updated this week)
- ☆465, updated Feb 4, 2026
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model (☆338, updated Oct 3, 2025)
- This repository summarizes recent advances in the VLA + RL paradigm and provides a taxonomic classification of relevant works. (☆387, updated Oct 10, 2025)
- [CVPR 2025 highlight] Generating 6DoF Object Manipulation Trajectories from Action Description in Egocentric Vision (☆33, updated Dec 2, 2025)
- [RSS 2025] Learning to Act Anywhere with Task-centric Latent Actions (☆990, updated Nov 19, 2025)
- ICCV 2025 (☆155, updated Dec 10, 2025)
- 🎁 A collection of utilities for LeRobot. (☆863, updated Feb 7, 2026)
- Paper list for the survey "A Survey on Vision-Language-Action Models: An Action Tokenization Perspective" (☆445, updated Jul 3, 2025)
- [Actively Maintained🔥] A list of Embodied AI papers accepted by top conferences (ICLR, NeurIPS, ICML, RSS, CoRL, ICRA, IROS, CVPR, ICCV,… (☆477, updated Dec 1, 2025)
- Official PyTorch implementation for ICML 2025 paper: UP-VLA. (☆55, updated Jan 20, 2026)
- ☆10,231, updated Dec 27, 2025
- Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning (☆79, updated May 17, 2025)
- An unofficial PyTorch dataloader for Open X-Embodiment Datasets, https://github.com/google-deepmind/open_x_embodiment (a minimal streaming sketch follows this list) (☆23, updated Jan 9, 2025)
- rmp data ranking (☆13, updated Nov 4, 2025)
- RDT-1B: a Diffusion Foundation Model for Bimanual Manipulation (☆1,615, updated Jan 21, 2026)
- Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success (☆1,037, updated Sep 9, 2025)
- [ICML 2025] OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction (☆115, updated Apr 14, 2025)
- Official PyTorch implementation of Unified Video Action Model (RSS 2025) (☆332, updated Jul 23, 2025)
- ☆68, updated Jan 8, 2025
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos (☆460, updated Jan 22, 2025)
- [IJCAI'24] An index of algorithms, approaches, and systems on cross-domain policy transfer for embodied agents (☆60, updated Feb 14, 2025)
- Discrete Diffusion VLA: Bringing Discrete Diffusion to Action Decoding in Vision-Language-Action Policies (☆55, updated Dec 3, 2025)
- Building General-Purpose Robots Based on Embodied Foundation Model (☆768, updated Feb 11, 2026)
- [CoRL25] GraspVLA: a Grasping Foundation Model Pre-trained on Billion-scale Synthetic Action Data (☆334, updated Dec 29, 2025)
- Onsite structured testing tool, which provides three testing capabilities: playback testing, fragmented two-way interactive testing and c… (☆30, updated Apr 16, 2025)
- OpenHelix: An Open-source Dual-System VLA Model for Robotic Manipulation (☆343, updated Aug 27, 2025)
- [Embodied-AI-Survey-2025] Paper List and Resource Repository for Embodied AI (☆1,907, updated Dec 17, 2025)
- A curated collection of papers on E2E-AD, aimed at researchers, engineers, and enthusiasts in the field of autonomous driving systems. Th… (☆91, updated Jan 18, 2026)
- 🔥 SpatialVLA: a spatial-enhanced vision-language-action model trained on 1.1 million real robot episodes. Accepted at RSS 2025. (☆651, updated Jun 23, 2025)
- Benchmarking Knowledge Transfer in Lifelong Robot Learning (☆1,485, updated Mar 15, 2025)
- [Lumina Embodied AI] Embodied AI Technical Guide (Embodied-AI-Guide) (☆11,884, updated Jan 15, 2026)
- Official implementation of the paper "Data-Agnostic Robotic Long-Horizon Manipulation with Vision-Language-Conditioned Closed-Loop Feedback" (☆18, updated Apr 10, 2025)
- Official implementation of "OneTwoVLA: A Unified Vision-Language-Action Model with Adaptive Reasoning" (☆208, updated May 30, 2025)
- A comprehensive collection of resources on dual-system VLA models, including papers, code, and related websites. (☆104, updated Nov 21, 2025)
- OpenVLA: An open-source vision-language-action model for robotic manipulation (a minimal inference sketch follows this list). (☆5,251, updated Mar 23, 2025)
- Official code of RDT 2 (☆708, updated Feb 7, 2026)
- StarVLA: A Lego-like Codebase for Vision-Language-Action Model Developing (☆1,098, updated Feb 11, 2026)
- RoboVerse: Towards a Unified Platform, Dataset and Benchmark for Scalable and Generalizable Robot Learning (☆1,669, updated this week)
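
For context on the unofficial Open X-Embodiment dataloader listed above: the OXE datasets are published in RLDS (TFDS) format, so a PyTorch loader typically flattens episodes into per-step samples. The following is a minimal sketch of that idea, not the linked repo's actual API; the builder path, dataset version, and field names are assumptions that vary across OXE embodiments.

```python
# Minimal sketch (NOT the linked repo's API): stream per-step samples out of
# an RLDS-formatted Open X-Embodiment dataset into PyTorch. Builder path,
# version, and field names are assumptions and differ per embodiment.
import tensorflow_datasets as tfds
import torch
from torch.utils.data import DataLoader, IterableDataset


class OpenXSteps(IterableDataset):
    """Flattens RLDS episodes into individual steps."""

    def __init__(self, builder_dir="gs://gresearch/robotics/bridge/0.1.0",
                 split="train"):
        builder = tfds.builder_from_directory(builder_dir=builder_dir)
        self._ds = builder.as_dataset(split=split)

    def __iter__(self):
        # tfds.as_numpy also unrolls the nested "steps" sub-dataset
        # into a plain iterator of numpy dicts.
        for episode in tfds.as_numpy(self._ds):
            for step in episode["steps"]:
                yield {
                    # uint8 HWC camera frame; the key name is dataset-specific
                    "image": torch.from_numpy(step["observation"]["image"]),
                    "is_terminal": bool(step["is_terminal"]),
                }


loader = DataLoader(OpenXSteps(), batch_size=32)
```

Because episodes have variable length, an `IterableDataset` that streams steps is usually simpler here than a map-style dataset with a precomputed step index.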
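And for OpenVLA, the most-starred model repository in this list: inference goes through Hugging Face `transformers` with `trust_remote_code`. The sketch below closely follows the quick-start in the OpenVLA README, but treat it as illustrative and verify against the repo; the image source and instruction are placeholders, and `unnorm_key` must name a training mixture whose action statistics the checkpoint knows (here the BridgeData V2 mixture).

```python
# Sketch closely following the OpenVLA README quick-start; remote-code APIs
# can change, so check the repo. Image path and instruction are placeholders.
import torch
from PIL import Image
from transformers import AutoModelForVision2Seq, AutoProcessor

processor = AutoProcessor.from_pretrained("openvla/openvla-7b", trust_remote_code=True)
vla = AutoModelForVision2Seq.from_pretrained(
    "openvla/openvla-7b",
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
).to("cuda:0")

image = Image.open("frame.png")  # placeholder: current third-person camera frame
prompt = "In: What action should the robot take to pick up the red block?\nOut:"

inputs = processor(prompt, image).to("cuda:0", dtype=torch.bfloat16)
# Returns a 7-DoF end-effector action, un-normalized with the statistics of
# the named training mixture (here BridgeData V2).
action = vla.predict_action(**inputs, unnorm_key="bridge_orig", do_sample=False)
```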