MARS-EAI / RoboFactory
[ICCV 2025] RoboFactory: Exploring Embodied Agent Collaboration with Compositional Constraints
☆105 · Updated 5 months ago
Alternatives and similar repositories for RoboFactory
Users interested in RoboFactory are comparing it to the libraries listed below.
- Official repository of LIBERO-plus, a generalized benchmark for in-depth robustness analysis of vision-language-action models. ☆208 · Updated 2 weeks ago
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization ☆156 · Updated 10 months ago
- Official repo of VLABench, a large-scale benchmark designed for fairly evaluating VLAs, embodied agents, and VLMs. ☆379 · Updated 2 months ago
- Official implementation of "OneTwoVLA: A Unified Vision-Language-Action Model with Adaptive Reasoning" ☆207 · Updated 8 months ago
- Code for FLIP: Flow-Centric Generative Planning for General-Purpose Manipulation Tasks ☆79 · Updated last year
- [NeurIPS 2025] VIKI-R: Coordinating Embodied Multi-Agent Cooperation via Reinforcement Learning ☆70 · Updated last month
- [NeurIPS 2025] Official implementation of Chain-of-Action: Trajectory Autoregressive Modeling for Robotic Manipulation ☆96 · Updated last month
- Official implementation of "WMPO: World Model-based Policy Optimization for Vision-Language-Action Models" ☆146 · Updated last month
- Official code for "Embodied-R1: Reinforced Embodied Reasoning for General Robotic Manipulation" ☆121 · Updated 5 months ago
- [ICML 2025 Oral] Official repo of EmbodiedBench, a comprehensive benchmark designed to evaluate MLLMs as embodied agents. ☆262 · Updated 3 months ago
- Interactive Post-Training for Vision-Language-Action Models ☆158 · Updated 8 months ago
- ☆233 · Updated 5 months ago
- [ICCV 2025 Oral] Latent Motion Token as the Bridging Language for Learning Robot Manipulation from Videos ☆162 · Updated 4 months ago
- ICCV 2025 ☆151 · Updated last month
- [ICLR 2026] InstructVLA: Vision-Language-Action Instruction Tuning from Understanding to Manipulation ☆94 · Updated last week
- Official PyTorch implementation of Unified Video Action Model (RSS 2025) ☆325 · Updated 6 months ago
- Implementation of VLM4VLA ☆106 · Updated this week
- [CVPR 2025] Official implementation of "GenManip: LLM-driven Simulation for Generalizable Instruction-Following Manipulation" ☆139 · Updated 3 weeks ago
- F1: A Vision-Language-Action Model Bridging Understanding and Generation to Actions ☆159 · Updated last month
- InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy ☆355 · Updated last month
- Single-file implementation to advance vision-language-action (VLA) models with reinforcement learning. ☆394 · Updated 2 months ago
- [ICLR 2025 Oral] Seer: Predictive Inverse Dynamics Models are Scalable Learners for Robotic Manipulation ☆277 · Updated 6 months ago
- Official repository of LIBERO-PRO, an evaluation extension of the original LIBERO benchmark ☆167 · Updated 3 weeks ago
- 🦾 A Dual-System VLA with System2 Thinking ☆132 · Updated 5 months ago
- Official code of the paper "DeeR-VLA: Dynamic Inference of Multimodal Large Language Models for Efficient Robot Execution" ☆124 · Updated 11 months ago
- [NeurIPS 2024] CLOVER: Closed-Loop Visuomotor Control with Generative Expectation for Robotic Manipulation ☆132 · Updated 4 months ago
- ☆67 · Updated last year
- Team Comet's 2025 BEHAVIOR Challenge codebase ☆214 · Updated last month
- Official repository of "Learning to Act from Actionless Videos through Dense Correspondences" ☆247 · Updated last year
- Cosmos Policy ☆393 · Updated 2 weeks ago