OpenHelix-Team / LLaVA-VLA
LLaVA-VLA: A Simple Yet Powerful Vision-Language-Action Model [Actively Maintained 🔥]
⭐ 134 · Updated this week
Alternatives and similar repositories for LLaVA-VLA
Users interested in LLaVA-VLA are comparing it to the repositories listed below.
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model (⭐ 277, updated 2 months ago)
- DreamVLA: A Vision-Language-Action Model Dreamed with Comprehensive World Knowledge (⭐ 158, updated this week)
- WorldVLA: Towards Autoregressive Action World Model (⭐ 363, updated last month)
- Official implementation of "RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics" (⭐ 138, updated last month)
- SoFar: Language-Grounded Orientation Bridges Spatial Reasoning and Object Manipulation (⭐ 188, updated 2 months ago)
- Unified Vision-Language-Action Model (⭐ 181, updated last month)
- ⭐ 81, updated 3 months ago
- ICCV2025 (⭐ 114, updated this week)
- The official repo for "SpatialBot: Precise Spatial Understanding with Vision Language Models." (⭐ 297, updated 3 months ago)
- Official code of the paper "DeeR-VLA: Dynamic Inference of Multimodal Large Language Models for Efficient Robot Execution" (⭐ 104, updated 6 months ago)
- 🦾 A Dual-System VLA with System2 Thinking (⭐ 92, updated last week)
- Single-file implementation to advance vision-language-action (VLA) models with reinforcement learning. (⭐ 240, updated last month)
- Official code for VLA-OS. (⭐ 94, updated 2 months ago)
- A curated list of large VLM-based VLA models for robotic manipulation. (⭐ 84, updated this week)
- 🔥 SpatialVLA: a spatial-enhanced vision-language-action model trained on 1.1 million real robot episodes. Accepted at RSS 2025. (⭐ 460, updated 2 months ago)
- Official implementation of "OneTwoVLA: A Unified Vision-Language-Action Model with Adaptive Reasoning" (⭐ 178, updated 3 months ago)
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization (⭐ 138, updated 4 months ago)
- The official implementation of RoboMatrix (⭐ 96, updated 3 months ago)
- OpenHelix: An Open-source Dual-System VLA Model for Robotic Manipulation (⭐ 254, updated this week)
- ⭐ 129, updated this week
- The repo of the paper "RoboMamba: Multimodal State Space Model for Efficient Robot Reasoning and Manipulation" (⭐ 132, updated 8 months ago)
- Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning (⭐ 73, updated 3 months ago)
- Online RL with Simple Reward Enables Training VLA Models with Only One Trajectory (⭐ 374, updated 2 months ago)
- Official implementation of the paper "StreamVLN: Streaming Vision-and-Language Navigation via SlowFast Context Modeling" (⭐ 192, updated last week)
- Latest advances in Vision-Language-Action Models. (⭐ 99, updated 5 months ago)
- Paper list for the survey "A Survey on Vision-Language-Action Models: An Action Tokenization Perspective" (⭐ 214, updated last month)
- [NeurIPS 2024] CLOVER: Closed-Loop Visuomotor Control with Generative Expectation for Robotic Manipulation (⭐ 123, updated last month)
- [CVPR 2025] Lift3D Foundation Policy: Lifting 2D Large-Scale Pretrained Models for Robust 3D Robotic Manipulation (⭐ 155, updated 2 months ago)
- A comprehensive list of dual-system VLA resources, including papers, code, and related websites. (⭐ 67, updated last month)
- ⭐ 386, updated 7 months ago