microsoft / GPT4Vision-Robot-Manipulation-Prompts
This repository provides sample code for interpreting human demonstration videos and converting them into high-level tasks for robots.
☆46 · Updated last year
Alternatives and similar repositories for GPT4Vision-Robot-Manipulation-Prompts
Users interested in GPT4Vision-Robot-Manipulation-Prompts are comparing it to the libraries listed below.
- ☆68 · Updated 8 months ago
- [ICRA 2025] PyTorch code for Local Policies Enable Zero-shot Long-Horizon Manipulation ☆132 · Updated 8 months ago
- PyTorch implementation of YAY Robot ☆168 · Updated last year
- [IROS 2024 Oral] ManipVQA: Injecting Robotic Affordance and Physically Grounded Information into Multi-Modal Large Language Models ☆98 · Updated last year
- Official implementation for the paper "EquiBot: SIM(3)-Equivariant Diffusion Policy for Generalizable and Data Efficient Learning" ☆163 · Updated last year
- ☆131 · Updated last month
- LLM3: Large Language Model-based Task and Motion Planning with Motion Failure Reasoning ☆95 · Updated last year
- ☆52 · Updated last month
- Official hardware codebase for the paper "BEHAVIOR Robot Suite: Streamlining Real-World Whole-Body Manipulation for Everyday Household Ac…" ☆129 · Updated last month
- A curated list of awesome open-source grasping libraries and resources ☆60 · Updated 5 months ago
- Manipulate-Anything: Automating Real-World Robots using Vision-Language Models [CoRL 2024] ☆49 · Updated 8 months ago
- A PyTorch re-implementation of RT-1 (Robotics Transformer) with a training and testing pipeline ☆59 · Updated last year
- A collection of papers, code, and talks on visual imitation learning / imitation learning from video for robotics ☆79 · Updated 3 years ago
- A simple testbed for robotics manipulation policies ☆103 · Updated 8 months ago
- [NeurIPS 2025 Spotlight 🎊] DexGarmentLab: Dexterous Garment Manipulation Environment with Generalizable Policy ☆109 · Updated 3 months ago
- Implementation of Ground4Act, a two-stage approach for collaborative pushing and grasping in clutter using a… ☆33 · Updated 8 months ago
- Official implementation of RoboBERT, a novel end-to-end multiple-modality robotic operations training framework ☆57 · Updated 6 months ago
- Code for the paper "Active Vision Might Be All You Need: Exploring Active Vision in Bimanual Robotic Manipulation" ☆51 · Updated 2 months ago
- Accompanying codebase for the paper "Touch begins where vision ends: Generalizable policies for contact-rich manipulation" ☆99 · Updated 5 months ago
- ☆76 · Updated last year
- Public release for "Distilling and Retrieving Generalizable Knowledge for Robot Manipulation via Language Corrections" ☆46 · Updated last year
- Waypoint-Based Imitation Learning for Robotic Manipulation ☆129 · Updated last year
- A library of long-horizon Task and Motion Planning (TAMP) problems in kitchen and household scenes, as well as planners to solve them ☆158 · Updated 7 months ago
- ☆42 · Updated last year
- Repository for RobustDexGrasp, which achieves robust dexterous grasping of 500+ unseen objects with random poses from single-vi… ☆126 · Updated 4 months ago
- RLAfford: End-to-End Affordance Learning for Robotic Manipulation (ICRA 2023) ☆121 · Updated last year
- Official implementation for the paper "Adaptive Compliance Policy: Learning Approximate Compliance for Diffusion Guided Control" ☆93 · Updated last year
- An official implementation of Vision-Language Interpreter (ViLaIn) ☆45 · Updated last year
- Code implementation of GraspGPT and FoundationGrasp ☆136 · Updated this week
- Code for the paper "Diff-Control: A stateful Diffusion-based Policy for Imitation Learning" (Liu et al., IROS 2024) ☆71 · Updated 6 months ago