RoboChallenge / RoboChallengeInference
RoboChallenge Inference example code (☆64, updated last month)
Alternatives and similar repositories for RoboChallengeInference
Users interested in RoboChallengeInference are comparing it to the libraries listed below.
- NORA: A Small Open-Sourced Generalist Vision Language Action Model for Embodied Tasks (☆198, updated last month)
- Official implementation of Chain-of-Action: Trajectory Autoregressive Modeling for Robotic Manipulation. Accepted at NeurIPS 2025. (☆90, updated 2 weeks ago)
- Embodied-Reasoner: Synergizing Visual Search, Reasoning, and Action for Embodied Interactive Tasks (☆185, updated 3 months ago)
- [CVPR 2025] The official implementation of "Universal Actions for Enhanced Embodied Foundation Models" (☆222, updated last month)
- (☆344, updated last week)
- Official repository of LIBERO-plus, a generalized benchmark for in-depth robustness analysis of vision-language-action models (☆151, updated 2 weeks ago)
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization (☆154, updated 8 months ago)
- InstructVLA: Vision-Language-Action Instruction Tuning from Understanding to Manipulation (☆88, updated 3 months ago)
- RynnVLA-001: Using Human Demonstrations to Improve Robot Manipulation (☆274, updated 3 weeks ago)
- [ICCV 2025] RoboFactory: Exploring Embodied Agent Collaboration with Compositional Constraints (☆97, updated 3 months ago)
- Video Prediction Policy: A Generalist Robot Policy with Predictive Visual Representations (https://video-prediction-policy.github.io) (☆314, updated 7 months ago)
- Official repo of VLABench, a large-scale benchmark designed for fairly evaluating VLAs, Embodied Agents, and VLMs (☆355, updated last month)
- Evo-1: Lightweight Vision-Language-Action Model with Preserved Semantic Alignment (☆192, updated 2 weeks ago)
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model (☆331, updated 2 months ago)
- MiMo-Embodied (☆333, updated last month)
- 🦾 A Dual-System VLA with System2 Thinking (☆123, updated 4 months ago)
- F1: A Vision Language Action Model Bridging Understanding and Generation to Actions (☆150, updated 2 months ago)
- (☆101, updated 2 months ago)
- Official code for "Embodied-R1: Reinforced Embodied Reasoning for General Robotic Manipulation" (☆115, updated 4 months ago)
- Unified Vision-Language-Action Model (☆257, updated 2 months ago)
- Official implementation of BitVLA: 1-bit Vision-Language-Action Models for Robotics Manipulation (☆98, updated 5 months ago)
- VLAC: A Vision-Language-Action-Critic Model for Robotic Real-World Reinforcement Learning (☆254, updated 3 months ago)
- Official implementation of "OneTwoVLA: A Unified Vision-Language-Action Model with Adaptive Reasoning" (☆205, updated 7 months ago)
- OpenHelix: An Open-source Dual-System VLA Model for Robotic Manipulation (☆332, updated 4 months ago)
- Official code of the paper "DeeR-VLA: Dynamic Inference of Multimodal Large Language Models for Efficient Robot Execution" (☆122, updated 10 months ago)
- Official code for EnerVerse-AC: Envisioning Embodied Environments with Action Condition (☆142, updated 5 months ago)
- OpenVLA: An open-source vision-language-action model for robotic manipulation (☆321, updated 9 months ago)
- A comprehensive collection of resources on dual-system VLA models, including papers, code, and related websites (☆94, updated last month)
- [ICML 2025 Oral] Official repo of EmbodiedBench, a comprehensive benchmark designed to evaluate MLLMs as embodied agents (☆245, updated 2 months ago)
- Official code for EWMBench: Evaluating Scene, Motion, and Semantic Quality in Embodied World Models (☆95, updated 6 months ago)