Jiaaqiliu / Awesome-VLA-Robotics
A comprehensive list of excellent research papers, models, datasets, and other resources on Vision-Language-Action (VLA) models in robotics.
⭐89 · Updated this week
Alternatives and similar repositories for Awesome-VLA-Robotics:
Users interested in Awesome-VLA-Robotics are comparing it to the libraries listed below.
- ⭐102 · Updated 3 weeks ago
- 🔥 RSS 2025 & CVPR 2025 & ICLR 2025 Embodied AI Paper List Resources. Star ⭐ the repo and follow me if you like what you see 🤩. ⭐247 · Updated last week
- RoboDual: Dual-System for Robotic Manipulation ⭐71 · Updated last week
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model ⭐188 · Updated last week
- Official repo of VLABench, a large-scale benchmark designed for fairly evaluating VLAs, embodied agents, and VLMs. ⭐215 · Updated last week
- [ICLR 2025 Oral] Seer: Predictive Inverse Dynamics Models are Scalable Learners for Robotic Manipulation ⭐169 · Updated 2 weeks ago
- [Lumina Embodied AI Community] A paper list for Embodied AI / Robotics ⭐97 · Updated this week
- [NeurIPS 2024] CLOVER: Closed-Loop Visuomotor Control with Generative Expectation for Robotic Manipulation ⭐113 · Updated 5 months ago
- The official implementation of RoboMatrix ⭐90 · Updated 4 months ago
- Code for Reinforcement Learning from Vision Language Foundation Model Feedback ⭐102 · Updated 11 months ago
- ⭐107 · Updated last month
- Embodied Chain of Thought: a robotic policy that reasons to solve tasks. ⭐232 · Updated last month
- The official implementation of the paper "ConRFT: A Reinforced Fine-tuning Method for VLA Models via Consistency Policy". ⭐68 · Updated 3 weeks ago
- [RSS 2024] Learning Manipulation by Predicting Interaction ⭐106 · Updated 8 months ago
- [CVPR 2025] The official implementation of "Universal Actions for Enhanced Embodied Foundation Models" ⭐145 · Updated last month
- Official implementation of the Nature Machine Intelligence paper "Preserving and Combining Knowledge in Robotic Lifelong Reinforcement Le…" ⭐70 · Updated last month
- Vision-Language Navigation Benchmark in Isaac Lab ⭐157 · Updated last month
- ⭐56 · Updated last week
- SoFar: Language-Grounded Orientation Bridges Spatial Reasoning and Object Manipulation ⭐147 · Updated last month
- [RSS 2024] Code for "Multimodal Diffusion Transformer: Learning Versatile Behavior from Multimodal Goals" for CALVIN experiments with pre… ⭐132 · Updated 6 months ago
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization ⭐113 · Updated last month
- DexGraspVLA: A Vision-Language-Action Framework Towards General Dexterous Grasping ⭐225 · Updated last week
- The official codebase for ManipLLM: Embodied Multimodal Large Language Model for Object-Centric Robotic Manipulation (CVPR 2024) ⭐131 · Updated 10 months ago
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation"β246Updated last year
- Official code of paper "DeeR-VLA: Dynamic Inference of Multimodal Large Language Models for Efficient Robot Execution"β91Updated 2 months ago
- Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Successβ363Updated last week
- 🔥 SpatialVLA: a spatial-enhanced vision-language-action model trained on 1.1 million real robot episodes. Accepted at RSS 2025. ⭐265 · Updated last week
- ⭐136 · Updated last month
- ⭐75 · Updated last week
- A comprehensive list of papers on the definition of World Models and on using World Models for General Video Generation, Embodied AI, and A… ⭐105 · Updated last week