xf-zhao / Matcha-agent
Official implementation of Matcha-agent, https://arxiv.org/abs/2303.08268
☆27 · Updated last year
Alternatives and similar repositories for Matcha-agent
Users interested in Matcha-agent are comparing it to the repositories listed below.
- Public release for "Distillation and Retrieving Generalizable Knowledge for Robot Manipulation via Language Corrections" ☆46 · Updated last year
- NeurIPS 2022 Paper "VLMbench: A Compositional Benchmark for Vision-and-Language Manipulation" ☆96 · Updated 6 months ago
- [CoRL 2023] REFLECT: Summarizing Robot Experiences for Failure Explanation and Correction ☆101 · Updated last year
- Code for the RA-L paper "Language Models as Zero-Shot Trajectory Generators" available at https://arxiv.org/abs/2310.11604. ☆102 · Updated 7 months ago
- LLM3: Large Language Model-based Task and Motion Planning with Motion Failure Reasoning ☆93 · Updated last year
- Official implementation for VIOLA ☆121 · Updated 2 years ago
- ☆46 · Updated last year
- https://arxiv.org/abs/2312.10807 ☆75 · Updated 11 months ago
- This code corresponds to transformer training and evaluation code used as part of the OPTIMUS project. ☆82 · Updated 2 years ago
- [IROS24 Oral] ManipVQA: Injecting Robotic Affordance and Physically Grounded Information into Multi-Modal Large Language Models ☆97 · Updated last year
- Companion Codebase for "No, to the Right – Online Language Corrections for Robotic Manipulation via Shared Autonomy" ☆28 · Updated 2 years ago
- Implementation of Language-Conditioned Path Planning (Amber Xie, Youngwoon Lee, Pieter Abbeel, Stephen James) ☆24 · Updated 2 years ago
- ☆42 · Updated last year
- [ICRA 2023] Grounding Language with Visual Affordances over Unstructured Data ☆46 · Updated 2 years ago
- Chain-of-Thought Predictive Control ☆58 · Updated 2 years ago
- Codebase for the paper "RoCo: Dialectic Multi-Robot Collaboration with Large Language Models" ☆225 · Updated 2 years ago
- [ICLR 2024] PyTorch Code for Plan-Seq-Learn: Language Model Guided RL for Solving Long Horizon Robotics Tasks ☆119 · Updated last year
- An official implementation of Vision-Language Interpreter (ViLaIn) ☆42 · Updated last year
- ☆11 · Updated last year
- ☆41 · Updated last year
- Code for the paper Robot Data Curation with Mutual Information Estimators ☆22 · Updated 6 months ago
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation" ☆44 · Updated last year
- Mobile manipulation in Habitat ☆95 · Updated 2 months ago
- Implementation of DeepMind's RoboCat: "Self-Improving Foundation Agent for Robotic Manipulation", a next-generation robot LLM ☆86 · Updated 2 years ago
- Official repository for "LIV: Language-Image Representations and Rewards for Robotic Control" (ICML 2023) ☆126 · Updated 2 years ago
- This repository provides the sample code designed to interpret human demonstration videos and convert them into high-level tasks for robo… ☆44 · Updated last year
- ☆83 · Updated 2 years ago
- A collection of papers, code, and talks on visual imitation learning / imitation learning from video for robotics. ☆79 · Updated 2 years ago
- Paper: Integrating Action Knowledge and LLMs for Task Planning and Situation Handling in Open Worlds ☆35 · Updated last year
- ☆50 · Updated 2 years ago