OpenMOSS / VLABench
Official repo of VLABench, a large-scale benchmark designed for the fair evaluation of VLAs, embodied agents, and VLMs.
☆310 · Updated 2 months ago
Alternatives and similar repositories for VLABench
Users interested in VLABench are comparing it to the libraries listed below.
- Embodied Chain of Thought: A robotic policy that reasons to solve tasks. ☆312 · Updated 6 months ago
- Single-file implementation to advance vision-language-action (VLA) models with reinforcement learning. ☆313 · Updated last month
- [ICLR 2025 Oral] Seer: Predictive Inverse Dynamics Models are Scalable Learners for Robotic Manipulation ☆247 · Updated 3 months ago
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization ☆144 · Updated 6 months ago
- Evaluating and reproducing real-world robot manipulation policies (e.g., RT-1, RT-1-X, Octo, and OpenVLA) in simulation under common setups… ☆222 · Updated 4 months ago
- OpenHelix: An Open-source Dual-System VLA Model for Robotic Manipulation ☆307 · Updated 2 months ago
- starVLA: A Lego-like Codebase for Vision-Language-Action Model Development ☆203 · Updated last week
- ☆184 · Updated 2 months ago
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model ☆297 · Updated 3 weeks ago
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos ☆387 · Updated 9 months ago
- ICCV 2025 ☆135 · Updated 2 months ago
- Official implementation of "OneTwoVLA: A Unified Vision-Language-Action Model with Adaptive Reasoning" ☆191 · Updated 4 months ago
- This repository summarizes recent advances in the VLA + RL paradigm and provides a taxonomic classification of relevant works. ☆300 · Updated 2 weeks ago
- ☆403 · Updated 9 months ago
- InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy ☆191 · Updated last week
- Official PyTorch Implementation of Unified Video Action Model (RSS 2025) ☆278 · Updated 3 months ago
- [ICCV 2025] RoboFactory: Exploring Embodied Agent Collaboration with Compositional Constraints ☆88 · Updated last month
- A Foundational Vision-Language-Action Model for Synergizing Cognition and Action in Robotic Manipulation ☆362 · Updated 5 months ago
- An example RLDS dataset builder for X-embodiment dataset conversion. ☆40 · Updated 7 months ago
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation" ☆285 · Updated last year
- [NeurIPS 2025 Spotlight] SoFar: Language-Grounded Orientation Bridges Spatial Reasoning and Object Manipulation ☆197 · Updated 3 months ago
- Video Prediction Policy: A Generalist Robot Policy with Predictive Visual Representations (https://video-prediction-policy.github.io) ☆283 · Updated 5 months ago
- [ICML 2025 Oral] Official repo of EmbodiedBench, a comprehensive benchmark designed to evaluate MLLMs as embodied agents. ☆200 · Updated last week
- Official codebase for "Any-point Trajectory Modeling for Policy Learning" ☆256 · Updated 4 months ago
- ☆226 · Updated last year
- Embodied Reasoning Question Answer (ERQA) Benchmark ☆232 · Updated 7 months ago
- Reimplementation of GR-1, a generalized policy for robot manipulation. ☆143 · Updated last year
- Code for Reinforcement Learning from Vision Language Foundation Model Feedback ☆126 · Updated last year
- OpenVLA: An open-source vision-language-action model for robotic manipulation. ☆271 · Updated 7 months ago
- 🔥 SpatialVLA: a spatial-enhanced vision-language-action model trained on 1.1 million real robot episodes. Accepted at RSS 2025. ☆541 · Updated 4 months ago