yueen-ma / Awesome-VLA
☆328 · Updated this week
Alternatives and similar repositories for Awesome-VLA
Users interested in Awesome-VLA are comparing it to the libraries listed below.
- [Actively Maintained 🔥] A list of Embodied AI papers accepted by top conferences (ICLR, NeurIPS, ICML, RSS, CoRL, ICRA, IROS, CVPR, ICCV, …) ☆397 · Updated last month
- SimpleVLA-RL: Scaling VLA Training via Reinforcement Learning ☆894 · Updated 2 weeks ago
- OpenHelix: An Open-source Dual-System VLA Model for Robotic Manipulation ☆307 · Updated 2 months ago
- Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success ☆780 · Updated last month
- This repository summarizes recent advances in the VLA + RL paradigm and provides a taxonomic classification of relevant works. ☆307 · Updated 3 weeks ago
- Single-file implementation to advance vision-language-action (VLA) models with reinforcement learning. ☆319 · Updated last month
- 🔥 SpatialVLA: a spatial-enhanced vision-language-action model trained on 1.1 million real robot episodes. Accepted at RSS 2025. ☆553 · Updated 4 months ago
- ☆403 · Updated 9 months ago
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model ☆304 · Updated 3 weeks ago
- It's not a list of papers, but a list of paper reading lists... ☆230 · Updated 6 months ago
- A Foundational Vision-Language-Action Model for Synergizing Cognition and Action in Robotic Manipulation ☆364 · Updated this week
- Embodied Chain of Thought: A robotic policy that reasons to solve the task. ☆312 · Updated 6 months ago
- [RSS 2025] Learning to Act Anywhere with Task-centric Latent Actions ☆805 · Updated 2 months ago
- A curated list of large VLM-based VLA models for robotic manipulation. ☆224 · Updated 3 weeks ago
- starVLA: A Lego-like Codebase for Vision-Language-Action Model Development ☆323 · Updated this week
- Official repo of VLABench, a large-scale benchmark designed for fairly evaluating VLA models, embodied agents, and VLMs. ☆316 · Updated 2 months ago
- Evaluating and reproducing real-world robot manipulation policies (e.g., RT-1, RT-1-X, Octo, and OpenVLA) in simulation under common setups… ☆223 · Updated 4 months ago
- DexGraspVLA: A Vision-Language-Action Framework Towards General Dexterous Grasping ☆402 · Updated 2 months ago
- RoboScholar: A Comprehensive Paper List of Embodied AI and Robotics Research ☆165 · Updated 3 weeks ago
- ☆184 · Updated 2 months ago
- Latest Advances on Vision-Language-Action Models. ☆116 · Updated 7 months ago
- A paper list of my reading history: robotics, learning, vision. ☆457 · Updated last week
- OpenVLA: An open-source vision-language-action model for robotic manipulation. ☆274 · Updated 7 months ago
- Evaluating and reproducing real-world robot manipulation policies (e.g., RT-1, RT-1-X, Octo) in simulation under common setups (e.g., Goo… ☆804 · Updated 7 months ago
- Benchmarking Knowledge Transfer in Lifelong Robot Learning ☆1,026 · Updated 7 months ago
- Paper list for the survey: A Survey on Vision-Language-Action Models: An Action Tokenization Perspective ☆296 · Updated 3 months ago
- A comprehensive collection on robot manipulation, including papers, code, and related websites. ☆598 · Updated this week
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos ☆394 · Updated 9 months ago
- Official implementation of the paper "ConRFT: A Reinforced Fine-tuning Method for VLA Models via Consistency Policy". ☆282 · Updated 4 months ago
- Galaxea's first VLA release ☆302 · Updated last week