[ICLR 2026] The official implementation of "Soft-Prompted Transformer as Scalable Cross-Embodiment Vision-Language-Action Model"
☆552 · Mar 9, 2026 · Updated 2 weeks ago
Alternatives and similar repositories for X-VLA
Users interested in X-VLA are comparing it to the repositories listed below.
- StarVLA: A Lego-like Codebase for Vision-Language-Action Model Development ☆1,435 · Mar 20, 2026 · Updated last week
- CoRobot embodied data framework ☆42 · Dec 9, 2025 · Updated 3 months ago
- Building General-Purpose Robots Based on Embodied Foundation Model ☆799 · Feb 11, 2026 · Updated last month
- Official code of RDT 2 ☆748 · Feb 7, 2026 · Updated last month
- [ICLR 2026] SimpleVLA-RL: Scaling VLA Training via Reinforcement Learning ☆1,524 · Jan 6, 2026 · Updated 2 months ago
- RLinf: Reinforcement Learning Infrastructure for Embodied and Agentic AI ☆2,815 · Mar 20, 2026 · Updated last week
- Galaxea's open-source VLA repository ☆549 · Feb 14, 2026 · Updated last month
- Evaluating and reproducing real-world robot manipulation policies (e.g., RT-1, RT-1-X, Octo) in simulation under common setups (e.g., Goo… ☆1,005 · Dec 20, 2025 · Updated 3 months ago
- Repository for the "AnywhereVLA: Language-Conditioned Exploration and Mobile Manipulation" paper ☆23 · Oct 25, 2025 · Updated 5 months ago
- 🦾 A Dual-System VLA with System2 Thinking ☆138 · Aug 21, 2025 · Updated 7 months ago
- A Foundational Vision-Language-Action Model for Synergizing Cognition and Action in Robotic Manipulation ☆409 · Oct 30, 2025 · Updated 4 months ago
- [RSS 2025] Learning to Act Anywhere with Task-centric Latent Actions ☆1,027 · Nov 19, 2025 · Updated 4 months ago
- ☆422 · Mar 11, 2026 · Updated 2 weeks ago
- Code for "ACG: Action Coherence Guidance for Flow-based Vision-Language-Action Models" (ICRA 2026) ☆74 · Mar 11, 2026 · Updated 2 weeks ago
- ☆10,755 · Mar 20, 2026 · Updated last week
- ☆46 · Apr 15, 2025 · Updated 11 months ago
- RoboTwin 2.0 Official Repo ☆2,082 · Mar 16, 2026 · Updated last week
- [TASE 2025] Efficient Alignment of Unconditioned Action Prior for Language-conditioned Pick and Place in Clutter ☆35 · Oct 27, 2025 · Updated 5 months ago
- Official code for the long-horizon language-conditioned robotic manipulation benchmark LoHoRavens. ☆22 · Oct 8, 2024 · Updated last year
- Benchmarking Knowledge Transfer in Lifelong Robot Learning ☆1,644 · Mar 15, 2025 · Updated last year
- EO: Open-source Unified Embodied Foundation Model Series ☆296 · Nov 12, 2025 · Updated 4 months ago
- AC-DiT: Adaptive Coordination Diffusion Transformer for Mobile Manipulation ☆37 · Feb 23, 2026 · Updated last month
- Gaussian Splatting for Robotic Simulation ☆23 · Nov 7, 2025 · Updated 4 months ago
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos ☆492 · Jan 22, 2025 · Updated last year
- Causal video-action world model for generalist robot control ☆892 · Feb 27, 2026 · Updated last month
- RynnVLA-002: A Unified Vision-Language-Action and World Model ☆955 · Dec 2, 2025 · Updated 3 months ago
- [NeurIPS 2025] DreamVLA: A Vision-Language-Action Model Dreamed with Comprehensive World Knowledge ☆309 · Jan 6, 2026 · Updated 2 months ago
- [IROS 2025 Best Paper Award Finalist & IEEE TRO 2026] The Large-scale Manipulation Platform for Scalable and Intelligent Embodied Systems ☆2,830 · Dec 16, 2025 · Updated 3 months ago
- [ICML 2024] 3D-VLA: A 3D Vision-Language-Action Generative World Model ☆621 · Oct 29, 2024 · Updated last year
- [CVPR 2025] The official implementation of "Universal Actions for Enhanced Embodied Foundation Models" ☆233 · Nov 6, 2025 · Updated 4 months ago
- OpenVLA: An open-source vision-language-action model for robotic manipulation. ☆5,622 · Mar 23, 2025 · Updated last year
- ☆19 · Jun 26, 2025 · Updated 9 months ago
- [ICLR 2026] InstructVLA: Vision-Language-Action Instruction Tuning from Understanding to Manipulation ☆108 · Jan 27, 2026 · Updated 2 months ago
- Official code of Motus: A Unified Latent Action World Model ☆870 · Jan 5, 2026 · Updated 2 months ago
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization ☆159 · Apr 6, 2025 · Updated 11 months ago
- ☆33 · May 16, 2025 · Updated 10 months ago
- 🔥 SpatialVLA: a spatial-enhanced vision-language-action model trained on 1.1 million real robot episodes. Accepted at RSS 2025. ☆673 · Jun 23, 2025 · Updated 9 months ago
- RDT-1B: a Diffusion Foundation Model for Bimanual Manipulation ☆1,650 · Jan 21, 2026 · Updated 2 months ago
- [RSS25] Official implementation of DemoGen: Synthetic Demonstration Generation for Data-Efficient Visuomotor Policy Learning ☆240 · Jul 18, 2025 · Updated 8 months ago