microsoft / GPT4Vision-Robot-Manipulation-Prompts
This repository provides sample code for interpreting human demonstration videos and converting them into high-level tasks for robots.
☆42 · Updated 9 months ago
Alternatives and similar repositories for GPT4Vision-Robot-Manipulation-Prompts
Users interested in GPT4Vision-Robot-Manipulation-Prompts are comparing it to the libraries listed below.
- ☆59 · Updated 4 months ago
- ☆41 · Updated 2 weeks ago
- Manipulate-Anything: Automating Real-World Robots using Vision-Language Models [CoRL 2024] ☆43 · Updated 5 months ago
- Official Hardware Codebase for the Paper "BEHAVIOR Robot Suite: Streamlining Real-World Whole-Body Manipulation for Everyday Household Ac…" ☆108 · Updated last week
- [ICRA 2025] PyTorch Code for Local Policies Enable Zero-shot Long-Horizon Manipulation ☆117 · Updated 4 months ago
- PyTorch implementation of YAY Robot ☆155 · Updated last year
- [IROS24 Oral] ManipVQA: Injecting Robotic Affordance and Physically Grounded Information into Multi-Modal Large Language Models ☆97 · Updated last year
- Code for Point Policy: Unifying Observations and Actions with Key Points for Robot Manipulation ☆75 · Updated last month
- ☆31 · Updated 10 months ago
- A curated list of awesome open-source grasping libraries and resources ☆59 · Updated last month
- [ICRA 2024] AirExo: Low-Cost Exoskeletons for Learning Whole-Arm Manipulation in the Wild ☆45 · Updated last year
- ☆57 · Updated 7 months ago
- ☆66 · Updated last year
- Accompanying codebase for the paper "Touch begins where vision ends: Generalizable policies for contact-rich manipulation" ☆85 · Updated 2 months ago
- PyTorch Code for Neural MP: A Generalist Neural Motion Planner ☆128 · Updated 9 months ago
- A collection of papers, code, and talks on visual imitation learning / imitation learning from video for robotics ☆78 · Updated 2 years ago
- ACE: A Cross-platform Visual-Exoskeletons for Low-Cost Dexterous Teleoperation ☆112 · Updated 11 months ago
- An official implementation of Vision-Language Interpreter (ViLaIn) ☆41 · Updated last year
- Code for the RA-L paper "Language Models as Zero-Shot Trajectory Generators", available at https://arxiv.org/abs/2310.11604 ☆102 · Updated 5 months ago
- Code for the paper "Active Vision Might Be All You Need: Exploring Active Vision in Bimanual Robotic Manipulation" ☆42 · Updated 5 months ago
- ☆89 · Updated 11 months ago
- ☆55 · Updated 9 months ago
- Repository for RobustDexGrasp, which achieves robust dexterous grasping of 500+ unseen objects with random poses from single-vi… ☆96 · Updated 2 weeks ago
- Public release for "Distillation and Retrieving Generalizable Knowledge for Robot Manipulation via Language Corrections" ☆46 · Updated last year
- Official Algorithm Codebase for the Paper "BEHAVIOR Robot Suite: Streamlining Real-World Whole-Body Manipulation for Everyday Household A…" ☆137 · Updated last week
- (RA-L 2025) UniT: Data Efficient Tactile Representation with Generalization to Unseen Objects ☆56 · Updated 4 months ago
- Official code for CVPR'23 paper "Learning Human-to-Robot Handovers from Point Clouds" ☆106 · Updated 5 months ago
- Code for the paper "Diff-Control: A stateful Diffusion-based Policy for Imitation Learning" (Liu et al., IROS 2024) ☆65 · Updated 3 months ago
- ☆115 · Updated last month
- A library of long-horizon Task-and-Motion-Planning (TAMP) problems in kitchen and household scenes, as well as planners to solve them ☆142 · Updated 3 months ago