2toinf / X-VLA
[ICLR 2026] The official implementation of "Soft-Prompted Transformer as Scalable Cross-Embodiment Vision-Language-Action Model"
☆502 · Feb 2, 2026 · Updated last week
Alternatives and similar repositories for X-VLA
Users interested in X-VLA are comparing it to the repositories listed below
- [ICRA 2026] 🌠DSPv2: Improved Dense Policy for Effective and Generalizable Whole-body Mobile Manipulation ☆29 · Jan 14, 2026 · Updated 3 weeks ago
- Repository for the "AnywhereVLA: Language-Conditioned Exploration and Mobile Manipulation" paper ☆18 · Oct 25, 2025 · Updated 3 months ago
- Galaxea's open-source VLA repository ☆513 · Jan 17, 2026 · Updated 3 weeks ago
- Code for "ACG: Action Coherence Guidance for Flow-based VLA Models" (ICRA 2026) ☆59 · Feb 3, 2026 · Updated last week
- [TASE 2025] Efficient Alignment of Unconditioned Action Prior for Language-conditioned Pick and Place in Clutter ☆35 · Oct 27, 2025 · Updated 3 months ago
- [ICLR 2026] SimpleVLA-RL: Scaling VLA Training via Reinforcement Learning ☆1,380 · Jan 6, 2026 · Updated last month
- [RSS 2025] Learning to Act Anywhere with Task-centric Latent Actions ☆984 · Nov 19, 2025 · Updated 2 months ago
- 🦾 A Dual-System VLA with System2 Thinking ☆132 · Aug 21, 2025 · Updated 5 months ago
- RLinf: Reinforcement Learning Infrastructure for Embodied and Agentic AI ☆2,412 · Updated this week
- ☆387 · Feb 2, 2026 · Updated last week
- Building General-Purpose Robots Based on Embodied Foundation Model ☆759 · Feb 3, 2026 · Updated last week
- A Foundational Vision-Language-Action Model for Synergizing Cognition and Action in Robotic Manipulation ☆405 · Oct 30, 2025 · Updated 3 months ago
- Official code of RDT 2 ☆686 · Feb 7, 2026 · Updated last week
- RoboTwin 2.0 Official Repo ☆1,934 · Updated this week
- Evaluating and reproducing real-world robot manipulation policies (e.g., RT-1, RT-1-X, Octo) in simulation under common setups (e.g., Goo… ☆968 · Dec 20, 2025 · Updated last month
- [ICRA 2025] CAGE: Causal Attention Enables Data-Efficient Generalizable Robotic Manipulation ☆36 · Jan 14, 2025 · Updated last year
- RynnVLA-002: A Unified Vision-Language-Action and World Model ☆875 · Dec 2, 2025 · Updated 2 months ago
- StarVLA: A Lego-like Codebase for Vision-Language-Action Model Developing ☆1,098 · Updated this week
- [ICLR 2026] InstructVLA: Vision-Language-Action Instruction Tuning from Understanding to Manipulation ☆96 · Jan 27, 2026 · Updated 2 weeks ago
- [CoRL 2025] GC-VLN: Instruction as Graph Constraints for Training-free Vision-and-Language Navigation ☆63 · Sep 16, 2025 · Updated 4 months ago
- ☆33 · May 16, 2025 · Updated 8 months ago
- ☆38 · Apr 15, 2025 · Updated 9 months ago
- [ICML 2024] 3D-VLA: A 3D Vision-Language-Action Generative World Model ☆618 · Oct 29, 2024 · Updated last year
- EO: Open-source Unified Embodied Foundation Model Series ☆291 · Nov 12, 2025 · Updated 3 months ago
- 🔥 SpatialVLA: a spatial-enhanced vision-language-action model trained on 1.1 million real robot episodes. Accepted at RSS 2025. ☆645 · Jun 23, 2025 · Updated 7 months ago
- ☆10,160 · Dec 27, 2025 · Updated last month
- Open-source code of the paper: Real-to-Sim Robot Policy Evaluation with Gaussian Splatting Simulation of Soft-Body Interactions. ☆164 · Nov 11, 2025 · Updated 3 months ago
- FieldGen is a semi-automatic data generation framework that enables scalable collection of diverse, high-quality real-world manipulation … ☆25 · Oct 28, 2025 · Updated 3 months ago
- Imitation Learning; Robotics; Policy; VLA ☆30 · Updated this week
- ☆19 · Sep 25, 2025 · Updated 4 months ago
- (ECCV 2024) Official implementation of the Economic 6-DoF Grasp Detection Framework (EconomicGrasp). ☆105 · May 28, 2025 · Updated 8 months ago
- [CVPR 2025] The official implementation of "Universal Actions for Enhanced Embodied Foundation Models" ☆229 · Nov 6, 2025 · Updated 3 months ago
- Causal video-action world model for generalist robot control ☆541 · Feb 6, 2026 · Updated last week
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos ☆460 · Jan 22, 2025 · Updated last year
- Code for [AAAI 2026] AffordDex: Towards Affordance-Aware Robotic Dexterous Grasping with Human-like Priors ☆25 · Dec 26, 2025 · Updated last month
- ☆19 · Jun 26, 2025 · Updated 7 months ago
- ☆25 · Aug 20, 2025 · Updated 5 months ago
- OpenVLA: An open-source vision-language-action model for robotic manipulation. ☆5,251 · Mar 23, 2025 · Updated 10 months ago
- RDT-1B: a Diffusion Foundation Model for Bimanual Manipulation ☆1,614 · Jan 21, 2026 · Updated 3 weeks ago