aopolin-lv / RoboMP2
[ICML 2024] RoboMP2: A Robotic Multimodal Perception-Planning Framework with Multimodal Large Language Models
☆12 · Updated 6 months ago
Alternatives and similar repositories for RoboMP2
Users interested in RoboMP2 are comparing it to the repositories listed below.
- LIBERO-PRO: the official repository for LIBERO-PRO, an evaluation extension of the original LIBERO benchmark ☆147 · Updated 3 weeks ago
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization ☆154 · Updated 9 months ago
- An example RLDS dataset builder for X-embodiment dataset conversion ☆55 · Updated 10 months ago
- [CoRL 2023] REFLECT: Summarizing Robot Experiences for Failure Explanation and Correction ☆101 · Updated last year
- [ICRA 2025] RACER: Rich Language-Guided Failure Recovery Policies for Imitation Learning ☆40 · Updated last year
- The official codebase for ManipLLM: Embodied Multimodal Large Language Model for Object-Centric Robotic Manipulation (CVPR 2024) ☆145 · Updated last year
- [IROS24 Oral] ManipVQA: Injecting Robotic Affordance and Physically Grounded Information into Multi-Modal Large Language Models ☆98 · Updated last year
- ☆47 · Updated last year
- ☆30 · Updated last year
- Official implementation of GR-MG ☆93 · Updated 11 months ago
- ☆33 · Updated last year
- A list of robotics-related papers accepted at ICLR'25 ☆25 · Updated 4 months ago
- Efficiently apply modification functions to RLDS/TFDS datasets ☆40 · Updated last year
- ☆41 · Updated 6 months ago
- MOKA: Open-World Robotic Manipulation through Mark-based Visual Prompting (RSS 2024) ☆94 · Updated last year
- A simple testbed for robotic manipulation policies ☆103 · Updated 8 months ago
- Reimplementation of GR-1, a generalized policy for robotic manipulation ☆146 · Updated last year
- Code for PerAct², a language-conditioned imitation-learning agent for bimanual robotic manipulation in the RLBench environment ☆110 · Updated 10 months ago
- Official repo for AGNOSTOS, a cross-task manipulation benchmark, and X-ICM, a cross-task in-context manipulation (VLA) method ☆53 · Updated last month
- Code for Reinforcement Learning from Vision Language Foundation Model Feedback ☆135 · Updated last year
- ☆47 · Updated last year
- Team Comet's 2025 BEHAVIOR Challenge codebase ☆183 · Updated last week
- Code for the ICRA24 paper "Think, Act, and Ask: Open-World Interactive Personalized Robot Navigation" (https://arxiv.org/abs/2310.07968) ☆31 · Updated last year
- Official implementation of "OneTwoVLA: A Unified Vision-Language-Action Model with Adaptive Reasoning" ☆205 · Updated 7 months ago
- A collection of vision-language-action model post-training methods ☆113 · Updated 2 months ago
- ☆62 · Updated last year
- ☆34 · Updated last year
- [ICML 2025] OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction ☆112 · Updated 8 months ago
- Interactive Post-Training for Vision-Language-Action Models ☆156 · Updated 7 months ago
- Embodied Chain of Thought: a robotic policy that reasons to solve tasks ☆350 · Updated 9 months ago