Robot-MA / manipulate-anything
Manipulate-Anything: Automating Real-World Robots using Vision-Language Models [CoRL 2024]
☆44 · Updated 5 months ago
Alternatives and similar repositories for manipulate-anything
Users interested in manipulate-anything are comparing it to the repositories listed below.
- Official implementation of RAM: Retrieval-Based Affordance Transfer for Generalizable Zero-Shot Robotic Manipulation ☆96 · Updated 8 months ago
- Official implementation of CoPa: General Robotic Manipulation through Spatial Constraints of Parts with Foundation Models ☆92 · Updated 7 months ago
- This code corresponds to simulation environments used as part of the DexMimicGen project. ☆150 · Updated last month
- Repository for the CoRL 2024 paper "Learning to Manipulate Anywhere: A Visual Generalizable Framework For Reinforcement Learning" ☆75 · Updated 9 months ago
- ☆152 · Updated 5 months ago
- A Benchmark for Low-Level Manipulation in Home Rearrangement Tasks ☆140 · Updated 2 weeks ago
- [RSS25] Official implementation of DemoGen: Synthetic Demonstration Generation for Data-Efficient Visuomotor Policy Learning ☆198 · Updated last month
- [CoRL2024] ThinkGrasp: A Vision-Language System for Strategic Part Grasping in Clutter. https://arxiv.org/abs/2407.11298 ☆98 · Updated last month
- Simulated experiments for "Real-Time Execution of Action Chunking Flow Policies". ☆231 · Updated last month
- ☆88 · Updated 2 months ago
- MOKA: Open-World Robotic Manipulation through Mark-based Visual Prompting (RSS 2024) ☆85 · Updated last year
- ☆112 · Updated 10 months ago
- ☆60 · Updated 5 months ago
- DexGarmentLab: Dexterous Garment Manipulation Environment with Generalizable Policy ☆78 · Updated last month
- [ICRA 25] FLaRe: Achieving Masterful and Adaptive Robot Policies with Large-Scale Reinforcement Learning Fine-Tuning ☆35 · Updated 8 months ago
- Official implementation for the paper "EquiBot: SIM(3)-Equivariant Diffusion Policy for Generalizable and Data Efficient Learning". ☆154 · Updated last year
- ☆68 · Updated 8 months ago
- [IROS24 Oral] ManipVQA: Injecting Robotic Affordance and Physically Grounded Information into Multi-Modal Large Language Models ☆98 · Updated last year
- [CoRL 2024] Im2Flow2Act: Flow as the Cross-domain Manipulation Interface ☆136 · Updated 10 months ago
- Official implementation of "Towards Generalizable Vision-Language Robotic Manipulation: A Benchmark and LLM-guided 3D Policy." ☆104 · Updated last week
- A simple testbed for robotics manipulation policies ☆101 · Updated 5 months ago
- [RSS2025] Code for my paper "You Only Teach Once: Learn One-Shot Bimanual Robotic Manipulation from Video Demonstrations" ☆103 · Updated 2 months ago
- [IROS 2025] Human Demo Videos to Robot Action Plans ☆64 · Updated 2 months ago
- A Vision-Language Model for Spatial Affordance Prediction in Robotics ☆183 · Updated last month
- Official Hardware Codebase for the Paper "BEHAVIOR Robot Suite: Streamlining Real-World Whole-Body Manipulation for Everyday Household Ac… ☆109 · Updated 2 weeks ago
- [RSS 2024] Code for "Multimodal Diffusion Transformer: Learning Versatile Behavior from Multimodal Goals" for CALVIN experiments with pre… ☆152 · Updated 10 months ago
- ☆56 · Updated 10 months ago
- Augment robotics demonstration datasets with different robots and viewpoints ☆35 · Updated 6 months ago
- Code for PerAct², a language-conditioned imitation learning agent designed for bimanual robotic manipulation using the RLBench environmen… ☆88 · Updated 6 months ago
- Fast-in-Slow: A Dual-System Foundation Model Unifying Fast Manipulation within Slow Reasoning ☆93 · Updated last month