kodenii / Responsible-Robotic-Manipulation
Responsible Robotic Manipulation
☆13 · Updated 2 months ago
Alternatives and similar repositories for Responsible-Robotic-Manipulation
Users interested in Responsible-Robotic-Manipulation are comparing it to the libraries listed below.
- ☆66 · Updated 2 weeks ago
- 🦾 A Dual-System VLA with System2 Thinking ☆114 · Updated 2 months ago
- ☆84 · Updated last year
- [NeurIPS 2025] VIKI‑R: Coordinating Embodied Multi-Agent Cooperation via Reinforcement Learning ☆54 · Updated 2 weeks ago
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization ☆145 · Updated 7 months ago
- ☆31 · Updated last year
- Evaluate Multimodal LLMs as Embodied Agents ☆54 · Updated 8 months ago
- Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning ☆75 · Updated 5 months ago
- ☆60 · Updated 10 months ago
- ☆34 · Updated 3 months ago
- Official code for "Embodied-R1: Reinforced Embodied Reasoning for General Robotic Manipulation" ☆98 · Updated 2 months ago
- ☆38 · Updated 4 months ago
- [ICML 2024] The official implementation of "DecisionNCE: Embodied Multimodal Representations via Implicit Preference Learning" ☆81 · Updated 5 months ago
- [ICCV 2025] RoboFactory: Exploring Embodied Agent Collaboration with Compositional Constraints ☆92 · Updated 2 months ago
- HAZARD challenge ☆36 · Updated 6 months ago
- [ICRA 2025] RACER: Rich Language-Guided Failure Recovery Policies for Imitation Learning ☆37 · Updated last year
- [CVPR 2025] Official implementation of "GenManip: LLM-driven Simulation for Generalizable Instruction-Following Manipulation" ☆75 · Updated last week
- [ICML 2025 Oral] Official repo of EmbodiedBench, a comprehensive benchmark designed to evaluate MLLMs as embodied agents. ☆207 · Updated 2 weeks ago
- VLA-RFT: Vision-Language-Action Models with Reinforcement Fine-Tuning ☆76 · Updated last month
- Official repository of LIBERO-plus, a generalized benchmark for in-depth robustness analysis of vision-language-action models. ☆124 · Updated this week
- [ICML 2025] OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction ☆109 · Updated 6 months ago
- Code for FLIP: Flow-Centric Generative Planning for General-Purpose Manipulation Tasks ☆76 · Updated 10 months ago
- ICCV 2025 ☆140 · Updated 2 months ago
- ☆75 · Updated last year
- [ICCV 2025 Oral] Latent Motion Token as the Bridging Language for Learning Robot Manipulation from Videos ☆145 · Updated last month
- Official Implementation of CAPEAM (ICCV'23) ☆13 · Updated 11 months ago
- Code & data for "RoboGround: Robotic Manipulation with Grounded Vision-Language Priors" (CVPR 2025) ☆27 · Updated 5 months ago
- ☆77 · Updated 5 months ago
- Official implementation of the paper "InSpire: Vision-Language-Action Models with Intrinsic Spatial Reasoning" ☆43 · Updated last month
- ☆32 · Updated last year