DelinQu / awesome-vision-language-action-model
Latest Advances on Vision-Language-Action Models.
⭐30 · Updated 3 weeks ago
Alternatives and similar repositories for awesome-vision-language-action-model:
Users interested in awesome-vision-language-action-model are comparing it to the libraries listed below.
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization · ⭐98 · Updated last week
- 🔥 SpatialVLA: a spatially enhanced vision-language-action model trained on 1.1 million real robot episodes · ⭐181 · Updated last week
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model · ⭐130 · Updated this week
- 🔥 CVPR 2025 & ICLR 2025 Embodied AI Paper List Resources. Star ⭐ the repo and follow me if you like what you see 🤩 · ⭐63 · Updated last week
- Unified Video Action Model · ⭐128 · Updated last week
- Official repo of VLABench, a large-scale benchmark designed for fairly evaluating VLAs, Embodied Agents, and VLMs · ⭐178 · Updated this week
- ⭐46 · Updated 3 months ago
- SoFar: Language-Grounded Orientation Bridges Spatial Reasoning and Object Manipulation · ⭐126 · Updated last week
- ⭐30 · Updated this week
- ⭐67 · Updated 6 months ago
- ⭐59 · Updated last week
- [NeurIPS 2024] CLOVER: Closed-Loop Visuomotor Control with Generative Expectation for Robotic Manipulation · ⭐106 · Updated 3 months ago
- Scripts for converting the OpenX (RLDS) dataset to the LeRobot dataset format · ⭐65 · Updated last week
- Official code of the paper "DeeR-VLA: Dynamic Inference of Multimodal Large Language Models for Efficient Robot Execution" · ⭐85 · Updated last month
- [CVPR 2025] The official implementation of "Universal Actions for Enhanced Embodied Foundation Models" · ⭐121 · Updated last week
- [ICLR 2025 Oral] Seer: Predictive Inverse Dynamics Models are Scalable Learners for Robotic Manipulation · ⭐127 · Updated last week
- A Foundational Vision-Language-Action Model for Synergizing Cognition and Action in Robotic Manipulation · ⭐205 · Updated last month
- [CoRL 2024] Official repo of `A3VLM: Actionable Articulation-Aware Vision Language Model` · ⭐109 · Updated 5 months ago
- Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning · ⭐54 · Updated 2 months ago
- [RSS 2024] Learning Manipulation by Predicting Interaction · ⭐101 · Updated 7 months ago
- Official implementation of "Data Scaling Laws in Imitation Learning for Robotic Manipulation" · ⭐155 · Updated 4 months ago
- ManiCM: Real-time 3D Diffusion Policy via Consistency Model for Robotic Manipulation · ⭐102 · Updated 8 months ago
- [RSS 2024] Code for "Multimodal Diffusion Transformer: Learning Versatile Behavior from Multimodal Goals" for CALVIN experiments with pre… · ⭐123 · Updated 5 months ago
- Code for FLIP: Flow-Centric Generative Planning for General-Purpose Manipulation Tasks · ⭐51 · Updated 3 months ago
- ⭐43 · Updated this week
- The official codebase for ManipLLM: Embodied Multimodal Large Language Model for Object-Centric Robotic Manipulation (CVPR 2024) · ⭐122 · Updated 8 months ago
- ⭐37 · Updated 4 months ago
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos · ⭐199 · Updated 2 months ago
- ⭐94 · Updated 7 months ago
- Latent Motion Token as the Bridging Language for Robot Manipulation · ⭐77 · Updated last week