MARS-EAI / VIKI-R
VIKI-R: Coordinating Embodied Multi-Agent Cooperation via Reinforcement Learning
☆39 · Updated last week
Alternatives and similar repositories for VIKI-R
Users interested in VIKI-R are comparing it to the repositories listed below.
- [ICCV 2025] RoboFactory: Exploring Embodied Agent Collaboration with Compositional Constraints ☆72 · Updated this week
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization ☆138 · Updated 4 months ago
- 🦾 A Dual-System VLA with System2 Thinking ☆99 · Updated last week
- The repo of the paper `RoboMamba: Multimodal State Space Model for Efficient Robot Reasoning and Manipulation` ☆132 · Updated 8 months ago
- Official code of the paper "DeeR-VLA: Dynamic Inference of Multimodal Large Language Models for Efficient Robot Execution" ☆104 · Updated 6 months ago
- Official implementation of Chain-of-Action: Trajectory Autoregressive Modeling for Robotic Manipulation ☆61 · Updated last month
- Embodied-Reasoner: Synergizing Visual Search, Reasoning, and Action for Embodied Interactive Tasks ☆163 · Updated 3 months ago
- ICCV2025 ☆114 · Updated this week
- Official repo of VLABench, a large-scale benchmark designed for fairly evaluating VLAs, Embodied Agents, and VLMs. ☆281 · Updated 3 weeks ago
- Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning ☆73 · Updated 3 months ago
- ☆78 · Updated 11 months ago
- Official implementation of "RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics" ☆138 · Updated last month
- Official implementation of "OneTwoVLA: A Unified Vision-Language-Action Model with Adaptive Reasoning" ☆178 · Updated 3 months ago
- Unified Vision-Language-Action Model ☆181 · Updated last month
- [CVPR2024] The official implementation of MP5 ☆103 · Updated last year
- [ICML 2025 Oral] Official repo of EmbodiedBench, a comprehensive benchmark designed to evaluate MLLMs as embodied agents. ☆179 · Updated last month
- InstructVLA: Vision-Language-Action Instruction Tuning from Understanding to Manipulation ☆39 · Updated last month
- DreamVLA: A Vision-Language-Action Model Dreamed with Comprehensive World Knowledge ☆158 · Updated this week
- WorldVLA: Towards Autoregressive Action World Model ☆363 · Updated last month
- ☆18 · Updated 3 weeks ago
- This repository compiles a list of papers related to the application of video technology in the field of robotics! Star⭐ the repo and fol… ☆165 · Updated 7 months ago
- ☆53 · Updated 2 months ago
- [CVPR 2025] The official implementation of "Universal Actions for Enhanced Embodied Foundation Models" ☆197 · Updated 5 months ago
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model ☆277 · Updated 2 months ago
- ☆48 · Updated 5 months ago
- OST-Bench: Evaluating the Capabilities of MLLMs in Online Spatio-temporal Scene Understanding ☆59 · Updated last month
- [ICCV2025 Oral] Latent Motion Token as the Bridging Language for Learning Robot Manipulation from Videos ☆122 · Updated 3 months ago
- Code for FLIP: Flow-Centric Generative Planning for General-Purpose Manipulation Tasks ☆72 · Updated 8 months ago
- LLaVA-VLA: A Simple Yet Powerful Vision-Language-Action Model [Actively Maintained🔥] ☆134 · Updated this week
- Video Prediction Policy: A Generalist Robot Policy with Predictive Visual Representations https://video-prediction-policy.github.io ☆252 · Updated 3 months ago