alibaba-damo-academy / WorldVLA
WorldVLA: Towards Autoregressive Action World Model ☆409 · Updated 3 weeks ago
Alternatives and similar repositories for WorldVLA
Users interested in WorldVLA are comparing it to the libraries listed below:
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model ☆286 · Updated 3 months ago
- 🔥 SpatialVLA: a spatial-enhanced vision-language-action model trained on 1.1 million real robot episodes. Accepted at RSS 2025. ☆490 · Updated 2 months ago
- LLaVA-VLA: A Simple Yet Powerful Vision-Language-Action Model [Actively Maintained 🔥] ☆145 · Updated 2 weeks ago
- Video Prediction Policy: A Generalist Robot Policy with Predictive Visual Representations https://video-prediction-policy.github.io ☆261 · Updated 4 months ago
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos ☆371 · Updated 7 months ago
- ☆394 · Updated 7 months ago
- Unified Vision-Language-Action Model ☆193 · Updated 2 months ago
- [CVPR 2025] The official implementation of "Universal Actions for Enhanced Embodied Foundation Models" ☆199 · Updated 5 months ago
- DreamVLA: A Vision-Language-Action Model Dreamed with Comprehensive World Knowledge ☆175 · Updated 2 weeks ago
- The official repo for "SpatialBot: Precise Spatial Understanding with Vision Language Models." ☆305 · Updated last week
- Paper list for the survey "A Survey on Vision-Language-Action Models: An Action Tokenization Perspective" ☆251 · Updated 2 months ago
- OpenHelix: An Open-source Dual-System VLA Model for Robotic Manipulation ☆267 · Updated 3 weeks ago
- [RSS 2025] Learning to Act Anywhere with Task-centric Latent Actions ☆736 · Updated last month
- Single-file implementation to advance vision-language-action (VLA) models with reinforcement learning. ☆281 · Updated last week
- A curated list of large VLM-based VLA models for robotic manipulation. ☆159 · Updated last week
- A Foundational Vision-Language-Action Model for Synergizing Cognition and Action in Robotic Manipulation ☆344 · Updated 3 months ago
- ☆80 · Updated this week
- ICCV 2025 ☆131 · Updated 3 weeks ago
- Official implementation of "OneTwoVLA: A Unified Vision-Language-Action Model with Adaptive Reasoning" ☆182 · Updated 3 months ago
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation" ☆279 · Updated last year
- Official PyTorch Implementation of Unified Video Action Model (RSS 2025) ☆266 · Updated last month
- SimpleVLA-RL: Scaling VLA Training via Reinforcement Learning ☆620 · Updated last week
- Building General-Purpose Robots Based on Embodied Foundation Model ☆432 · Updated last week
- Official repo of VLABench, a large-scale benchmark designed for fairly evaluating VLAs, Embodied Agents, and VLMs. ☆291 · Updated last month
- RynnVLA-001: A Vision-Language-Action Model Boosted by Generative Priors ☆167 · Updated 3 weeks ago
- Latest Advances on Vision-Language-Action Models. ☆108 · Updated 6 months ago
- The repo of the paper `RoboMamba: Multimodal State Space Model for Efficient Robot Reasoning and Manipulation` ☆134 · Updated 8 months ago
- ☆263 · Updated last week
- Official implementation of "RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics" ☆149 · Updated last month
- ☆147 · Updated 3 weeks ago