OpenHelix-Team / UD-VLA
☆36 · Updated this week
Alternatives and similar repositories for UD-VLA
Users interested in UD-VLA are comparing it to the libraries listed below
- [Actively Maintained🔥] A list of Embodied AI papers accepted by top conferences (ICLR, NeurIPS, ICML, RSS, CoRL, ICRA, IROS, CVPR, ICCV,… ☆401 · Updated last week
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model ☆309 · Updated last month
- starVLA: A Lego-like Codebase for Vision-Language-Action Model Development ☆387 · Updated this week
- This repository summarizes recent advances in the VLA + RL paradigm and provides a taxonomic classification of relevant works. ☆310 · Updated 3 weeks ago
- OpenHelix: An Open-source Dual-System VLA Model for Robotic Manipulation ☆314 · Updated 2 months ago
- Single-file implementation to advance vision-language-action (VLA) models with reinforcement learning. ☆325 · Updated this week
- Soft-Prompted Transformer as Scalable Cross-Embodiment Vision-Language-Action Model ☆157 · Updated 3 weeks ago
- 🔥 SpatialVLA: a spatial-enhanced vision-language-action model trained on 1.1 million real robot episodes. Accepted at RSS 2025. ☆560 · Updated 4 months ago
- InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy ☆230 · Updated last week
- A curated list of large VLM-based VLA models for robotic manipulation. ☆231 · Updated last month
- [NeurIPS 2025 Spotlight] SoFar: Language-Grounded Orientation Bridges Spatial Reasoning and Object Manipulation ☆200 · Updated 4 months ago
- [ICLR 2025 Oral] Seer: Predictive Inverse Dynamics Models are Scalable Learners for Robotic Manipulation ☆252 · Updated 4 months ago
- Official repo of VLABench, a large-scale benchmark designed for fairly evaluating VLAs, Embodied Agents, and VLMs. ☆319 · Updated 3 months ago
- [NeurIPS 2025] DreamVLA: A Vision-Language-Action Model Dreamed with Comprehensive World Knowledge ☆213 · Updated last month
- A collection of vision-language-action model post-training methods. ☆109 · Updated last week
- [RSS 2025] Learning to Act Anywhere with Task-centric Latent Actions ☆819 · Updated this week
- ☆337 · Updated last week
- ICCV2025 ☆140 · Updated 2 months ago
- ☆192 · Updated 2 months ago
- Official implementation of the paper: "StreamVLN: Streaming Vision-and-Language Navigation via SlowFast Context Modeling" ☆285 · Updated last week
- [arXiv 2025] Official code for MoLe-VLA: Dynamic Layer-skipping Vision Language Action Model via Mixture-of-Layers for Efficient Robot Ma… ☆49 · Updated 3 months ago
- WorldVLA: Towards Autoregressive Action World Model ☆511 · Updated 3 weeks ago
- PyTorch PI-zero and PI-zero-fast, adapted from LeRobot ☆143 · Updated 2 months ago
- RoboScholar: A Comprehensive Paper List of Embodied AI and Robotics Research ☆165 · Updated 3 weeks ago
- Unified Vision-Language-Action Model ☆223 · Updated 3 weeks ago
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization ☆145 · Updated 7 months ago
- [RSS 2024 & RSS 2025] VLN-CE evaluation code of NaVid and Uni-NaVid ☆299 · Updated 3 weeks ago
- Video Prediction Policy: A Generalist Robot Policy with Predictive Visual Representations https://video-prediction-policy.github.io ☆287 · Updated 5 months ago
- ☆405 · Updated 9 months ago
- LLaVA-VLA: A Simple Yet Powerful Vision-Language-Action Model [Actively Maintained🔥] ☆167 · Updated last week