xf-zhao / Matcha-agent
Official implementation of Matcha-agent, https://arxiv.org/abs/2303.08268
☆27 · Updated last year
Alternatives and similar repositories for Matcha-agent
Users interested in Matcha-agent are comparing it to the repositories listed below.
- Public release for "Distillation and Retrieving Generalizable Knowledge for Robot Manipulation via Language Corrections" ☆46 · Updated last year
- Official implementation for VIOLA ☆120 · Updated 2 years ago
- [CoRL 2023] REFLECT: Summarizing Robot Experiences for Failure Explanation and Correction ☆101 · Updated last year
- ☆11 · Updated last year
- Transformer training and evaluation code used as part of the OPTIMUS project. ☆81 · Updated last year
- NeurIPS 2022 paper "VLMbench: A Compositional Benchmark for Vision-and-Language Manipulation" ☆96 · Updated 5 months ago
- [ICLR 2024] PyTorch code for Plan-Seq-Learn: Language Model Guided RL for Solving Long Horizon Robotics Tasks ☆119 · Updated last year
- [IROS24 Oral] ManipVQA: Injecting Robotic Affordance and Physically Grounded Information into Multi-Modal Large Language Models ☆97 · Updated last year
- Code for the RA-L paper "Language Models as Zero-Shot Trajectory Generators", available at https://arxiv.org/abs/2310.11604. ☆102 · Updated 7 months ago
- ☆71 · Updated last year
- ☆41 · Updated last year
- Companion codebase for "No, to the Right – Online Language Corrections for Robotic Manipulation via Shared Autonomy" ☆27 · Updated 2 years ago
- ☆61 · Updated 6 months ago
- Paper: Integrating Action Knowledge and LLMs for Task Planning and Situation Handling in Open Worlds ☆35 · Updated last year
- This repository provides the sample code designed to interpret human demonstration videos and convert them into high-level tasks for robo… ☆42 · Updated 11 months ago
- ☆29 · Updated last year
- Chain-of-Thought Predictive Control ☆58 · Updated 2 years ago
- An official implementation of Vision-Language Interpreter (ViLaIn) ☆41 · Updated last year
- https://arxiv.org/abs/2312.10807 ☆74 · Updated 10 months ago
- A collection of papers, code, and talks on visual imitation learning / imitation learning from video for robotics. ☆79 · Updated 2 years ago
- Mobile manipulation in Habitat ☆93 · Updated 2 months ago
- Official repository for "LIV: Language-Image Representations and Rewards for Robotic Control" (ICML 2023) ☆124 · Updated last year
- ☆50 · Updated 2 years ago
- Code for the paper "Active Vision Might Be All You Need: Exploring Active Vision in Bimanual Robotic Manipulation" ☆47 · Updated 3 weeks ago
- [ICRA 2023] Grounding Language with Visual Affordances over Unstructured Data ☆46 · Updated last year
- LLM3: Large Language Model-based Task and Motion Planning with Motion Failure Reasoning ☆91 · Updated last year
- Official implementation of GROOT, CoRL 2023 ☆62 · Updated last year
- ☆45 · Updated last year
- ☆41 · Updated 5 months ago
- ☆90 · Updated last year