xf-zhao / Matcha-agent
Official implementation of Matcha-agent, https://arxiv.org/abs/2303.08268
☆27 · Updated last year
Alternatives and similar repositories for Matcha-agent
Users interested in Matcha-agent are comparing it to the repositories listed below.
- Public release for "Distilling and Retrieving Generalizable Knowledge for Robot Manipulation via Language Corrections" ☆46 · Updated last year
- NeurIPS 2022 Paper "VLMbench: A Compositional Benchmark for Vision-and-Language Manipulation" ☆96 · Updated 7 months ago
- [CoRL 2023] REFLECT: Summarizing Robot Experiences for Failure Explanation and Correction ☆101 · Updated last year
- LLM3: Large Language Model-based Task and Motion Planning with Motion Failure Reasoning ☆95 · Updated last year
- ☆46 · Updated last year
- https://arxiv.org/abs/2312.10807 ☆76 · Updated 3 weeks ago
- Official implementation for VIOLA ☆122 · Updated 2 years ago
- [IROS 2024 Oral] ManipVQA: Injecting Robotic Affordance and Physically Grounded Information into Multi-Modal Large Language Models ☆98 · Updated last year
- [ICLR 2024] PyTorch Code for Plan-Seq-Learn: Language Model Guided RL for Solving Long Horizon Robotics Tasks ☆119 · Updated last year
- [ICRA 2023] Grounding Language with Visual Affordances over Unstructured Data ☆45 · Updated 2 years ago
- Transformer training and evaluation code used as part of the OPTIMUS project ☆82 · Updated 2 years ago
- Code for the RA-L paper "Language Models as Zero-Shot Trajectory Generators" (https://arxiv.org/abs/2310.11604) ☆105 · Updated 9 months ago
- Code base for the See to Touch project: https://see-to-touch.github.io/ ☆52 · Updated 2 years ago
- ☆50 · Updated 2 years ago
- An official implementation of Vision-Language Interpreter (ViLaIn) ☆45 · Updated last year
- Chain-of-Thought Predictive Control ☆57 · Updated 2 years ago
- ☆11 · Updated last year
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation" ☆44 · Updated last year
- Mobile manipulation in Habitat ☆98 · Updated 4 months ago
- ☆41 · Updated last year
- ProgPrompt for VirtualHome ☆145 · Updated 2 years ago
- ☆93 · Updated last year
- ☆42 · Updated last year
- A collection of papers, code, and talks on visual imitation learning and imitation learning from video for robotics ☆79 · Updated 3 years ago
- Paper: Integrating Action Knowledge and LLMs for Task Planning and Situation Handling in Open Worlds ☆35 · Updated last year
- ☆76 · Updated last year
- Implementation of Language-Conditioned Path Planning (Amber Xie, Youngwoon Lee, Pieter Abbeel, Stephen James) ☆25 · Updated 2 years ago
- ☆68 · Updated 8 months ago
- Decomposing the Generalization Gap in Imitation Learning for Visual Robotic Manipulation (2023) ☆44 · Updated 2 years ago
- Code for "Prompt a Robot to Walk with Large Language Models" (https://arxiv.org/abs/2309.09969) ☆112 · Updated 2 years ago