microsoft / GPT4Vision-Robot-Manipulation-Prompts
This repository provides sample code for interpreting human demonstration videos and converting them into high-level tasks for robots.
☆45 · Updated last year
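As a rough illustration of the pipeline the description refers to (this is not the repository's actual code; the model name, frame paths, and prompt wording below are assumptions for the sketch), the general pattern is to sample frames from the demonstration video and ask a GPT-4V-capable model to summarize them as high-level task steps:

```python
import base64
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def encode_frame(path: str) -> str:
    """Return a video frame (JPEG on disk) as a base64 string."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

# Frames sampled from the human demonstration video (hypothetical paths).
frame_paths = ["frames/000.jpg", "frames/030.jpg", "frames/060.jpg"]

# One text instruction followed by the frames, in the multimodal
# content format the Chat Completions API expects.
content = [{
    "type": "text",
    "text": ("These frames are from a human demonstration video. "
             "Describe the demonstration as an ordered list of "
             "high-level robot manipulation tasks."),
}]
for path in frame_paths:
    content.append({
        "type": "image_url",
        "image_url": {"url": f"data:image/jpeg;base64,{encode_frame(path)}"},
    })

response = client.chat.completions.create(
    model="gpt-4o",  # any GPT-4V-capable model; name is an assumption
    messages=[{"role": "user", "content": content}],
)
print(response.choices[0].message.content)
```

In practice frames would be subsampled from the video at a fixed interval (e.g. one per second) rather than hard-coded, and the prompt would typically constrain the output to a fixed task vocabulary the robot can execute.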
Alternatives and similar repositories for GPT4Vision-Robot-Manipulation-Prompts
Users interested in GPT4Vision-Robot-Manipulation-Prompts are comparing it to the repositories listed below.
- ☆75 · Updated 10 months ago
- [ICRA 2025] PyTorch Code for Local Policies Enable Zero-shot Long-Horizon Manipulation ☆138 · Updated 9 months ago
- A curated list of awesome open-source grasping libraries and resources ☆62 · Updated 6 months ago
- ☆45 · Updated last year
- [IROS 2024 Oral] ManipVQA: Injecting Robotic Affordance and Physically Grounded Information into Multi-Modal Large Language Models ☆99 · Updated last year
- Manipulate-Anything: Automating Real-World Robots using Vision-Language Models [CoRL 2024] ☆53 · Updated 10 months ago
- PyTorch implementation of YAY Robot ☆169 · Updated last year
- ☆52 · Updated 3 months ago
- An official implementation of Vision-Language Interpreter (ViLaIn) ☆47 · Updated last year
- Official implementation of CoPa: General Robotic Manipulation through Spatial Constraints of Parts with Foundation Models ☆102 · Updated last year
- LeRobot extension for the Franka robot and XHand; an instantiation of the LeVR framework ☆98 · Updated 4 months ago
- Official implementation of the paper "Adaptive Compliance Policy: Learning Approximate Compliance for Diffusion Guided Control" ☆106 · Updated last year
- A library of long-horizon Task-and-Motion-Planning (TAMP) problems in kitchen and household scenes, as well as planners to solve them ☆162 · Updated 8 months ago
- Waypoint-Based Imitation Learning for Robotic Manipulation ☆137 · Updated last year
- Public release for "Distilling and Retrieving Generalizable Knowledge for Robot Manipulation via Language Corrections" ☆48 · Updated last year
- LLM3: Large Language Model-based Task and Motion Planning with Motion Failure Reasoning ☆95 · Updated last year
- ☆55 · Updated 3 weeks ago
- [ICLR 2024] PyTorch Code for Plan-Seq-Learn: Language Model Guided RL for Solving Long Horizon Robotics Tasks ☆123 · Updated last year
- Official implementation of RoboBERT, a novel end-to-end multiple-modality robotic operations training framework ☆58 · Updated 8 months ago
- Accompanying codebase for the paper "Touch begins where vision ends: Generalizable policies for contact-rich manipulation" ☆99 · Updated 7 months ago
- Official repo of the CoRL 2024 paper "Learning to Manipulate Anywhere: A Visual Generalizable Framework for Reinforcement Learning" ☆82 · Updated last year
- ☆63 · Updated 4 months ago
- Implementation of Ground4Act, a two-stage approach for collaborative pushing and grasping in clutter using a… ☆33 · Updated 10 months ago
- ☆45 · Updated 8 months ago
- RLAfford: End-to-End Affordance Learning for Robotic Manipulation, ICRA 2023 ☆124 · Updated last year
- A collection of papers, code, and talks on visual imitation learning / imitation learning from video for robotics ☆79 · Updated 3 years ago
- Code for the paper "Diff-Control: A Stateful Diffusion-based Policy for Imitation Learning" (Liu et al., IROS 2024) ☆73 · Updated 8 months ago
- [RSS 2025] Code for the paper "You Only Teach Once: Learn One-Shot Bimanual Robotic Manipulation from Video Demonstrations" ☆129 · Updated 6 months ago
- 🎉 [ICLR 2026] Flowing from Vision to Action: Noise-Free Flow Matching Policy Learning ☆81 · Updated this week
- Code implementation of GraspGPT and FoundationGrasp ☆141 · Updated last month