microsoft / GPT4Vision-Robot-Manipulation-Prompts
This repository provides sample code for interpreting human demonstration videos and converting them into high-level robot tasks.
☆40 · Updated 7 months ago
Alternatives and similar repositories for GPT4Vision-Robot-Manipulation-Prompts
Users interested in GPT4Vision-Robot-Manipulation-Prompts are comparing it to the libraries listed below.
- [IROS24 Oral] ManipVQA: Injecting Robotic Affordance and Physically Grounded Information into Multi-Modal Large Language Models ☆94 · Updated 10 months ago
- A simple testbed for robotics manipulation policies ☆94 · Updated 2 months ago
- The official implementation of RoboBERT, a novel end-to-end multiple-modality robotic manipulation training framework ☆51 · Updated last month
- ☆54 · Updated 2 months ago
- Code for the paper "Active Vision Might Be All You Need: Exploring Active Vision in Bimanual Robotic Manipulation" ☆36 · Updated 3 months ago
- Official code repository for GENIMA ☆72 · Updated 8 months ago
- Manipulate-Anything: Automating Real-World Robots using Vision-Language Models [CoRL 2024] ☆34 · Updated 2 months ago
- [IROS 2025] Human Demo Videos to Robot Action Plans ☆55 · Updated last week
- ☆46 · Updated 7 months ago
- Official implementation of CoPa: General Robotic Manipulation through Spatial Constraints of Parts with Foundation Models ☆83 · Updated 5 months ago
- Official implementation of the paper "EquiBot: SIM(3)-Equivariant Diffusion Policy for Generalizable and Data Efficient Learning" ☆148 · Updated 11 months ago
- Public release for "Distilling and Retrieving Generalizable Knowledge for Robot Manipulation via Language Corrections" ☆44 · Updated last year
- A collection of papers, code, and talks on visual imitation learning and imitation learning from video for robotics ☆72 · Updated 2 years ago
- [CoRL 2023] REFLECT: Summarizing Robot Experiences for Failure Explanation and Correction ☆94 · Updated last year
- RLAfford: End-to-End Affordance Learning for Robotic Manipulation, ICRA 2023 ☆112 · Updated last year
- [CoRL 2024] ThinkGrasp: A Vision-Language System for Strategic Part Grasping in Clutter. https://arxiv.org/abs/2407.11298 ☆75 · Updated 2 weeks ago
- [ICRA 2025] PyTorch code for "Local Policies Enable Zero-shot Long-Horizon Manipulation" ☆112 · Updated 2 months ago
- Repository for RobustDexGrasp, which achieves robust dexterous grasping of 500+ unseen objects with random poses from single-vi… ☆50 · Updated last month
- ☆41 · Updated last year
- ☆59 · Updated last year
- Code implementation of GraspGPT and FoundationGrasp ☆117 · Updated last month
- Repository for "Learning to Manipulate Anywhere: A Visual Generalizable Framework For Reinforcement Learning" ☆63 · Updated 6 months ago
- Code for "Point Policy: Unifying Observations and Actions with Key Points for Robot Manipulation" ☆70 · Updated 3 months ago
- Waypoint-Based Imitation Learning for Robotic Manipulation ☆116 · Updated last year
- Fix bug and add a new task ☆37 · Updated 8 months ago
- [CVPR 2024] Hierarchical Diffusion Policy for Multi-Task Robotic Manipulation ☆183 · Updated last year
- Official hardware codebase for the paper "BEHAVIOR Robot Suite: Streamlining Real-World Whole-Body Manipulation for Everyday Household Ac… ☆93 · Updated 3 months ago
- ☆63 · Updated 4 months ago
- [ICRA 2023] A Joint Modeling of Vision-Language-Action for Target-oriented Grasping in Clutter ☆141 · Updated 2 months ago
- Code for the paper "Diff-Control: A stateful Diffusion-based Policy for Imitation Learning" (Liu et al., IROS 2024) ☆61 · Updated 3 weeks ago