tasl-lab / LaMMA-P
☆12, updated 2 weeks ago
Alternatives and similar repositories for LaMMA-P
Users interested in LaMMA-P are comparing it to the repositories listed below.
- Paper: Integrating Action Knowledge and LLMs for Task Planning and Situation Handling in Open Worlds (☆35, updated last year)
- Enhancing LLM/VLM capability for robot task and motion planning with extra algorithm-based tools (☆69, updated 8 months ago)
- [NeurIPS 2024] PIVOT-R: Primitive-Driven Waypoint-Aware World Model for Robotic Manipulation (☆38, updated 7 months ago)
- Winner of the ManiSkill2 Challenge (☆2, updated 11 months ago)
- LLM3: Large Language Model-based Task and Motion Planning with Motion Failure Reasoning (☆83, updated last year)
- Public release for "Distilling and Retrieving Generalizable Knowledge for Robot Manipulation via Language Corrections" (☆44, updated 11 months ago)
- ☆17, updated 5 months ago
- InterPreT: Interactive Predicate Learning from Language Feedback for Generalizable Task Planning (RSS 2024) (☆30, updated 11 months ago)
- ☆33, updated last year
- ☆52, updated last month
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation" (☆44, updated last year)
- Companion codebase for "No, to the Right – Online Language Corrections for Robotic Manipulation via Shared Autonomy" (☆27, updated 2 years ago)
- Code for the paper "Policy Adaptation via Language Optimization: Decomposing Tasks for Few-Shot Imitation" (☆29, updated 6 months ago)
- https://arxiv.org/abs/2312.10807 (☆71, updated 6 months ago)
- ☆38, updated 3 weeks ago
- Official implementation of Matcha-agent, https://arxiv.org/abs/2303.08268 (☆26, updated 9 months ago)
- Official implementation of CausalMoMa (RSS 2023) (☆22, updated 2 years ago)
- [IROS 2024 Oral] ManipVQA: Injecting Robotic Affordance and Physically Grounded Information into Multi-Modal Large Language Models (☆93, updated 9 months ago)
- ☆59, updated last year
- A collection of papers, code, and talks on visual imitation learning / imitation learning from video for robotics (☆69, updated 2 years ago)
- RobotVQA is a project that develops a deep-learning-based cognitive vision system to support household robots' perception while they perf… (☆17, updated 10 months ago)
- Repository for "Learning to Manipulate Anywhere: A Visual Generalizable Framework for Reinforcement Learning" (☆60, updated 5 months ago)
- Official implementation of "OneTwoVLA: A Unified Vision-Language-Action Model with Adaptive Reasoning" (☆59, updated last week)
- A simple testbed for robotic manipulation policies (☆90, updated last month)
- [CoRL 2023] REFLECT: Summarizing Robot Experiences for Failure Explanation and Correction (☆94, updated last year)
- Manipulate-Anything: Automating Real-World Robots using Vision-Language Models [CoRL 2024] (☆29, updated 2 months ago)
- Code to evaluate a solution in the BEHAVIOR benchmark: starter code, baselines, and submodules for the iGibson and BDDL repos (☆63, updated last year)
- ☆40, updated 9 months ago
- ☆10, updated 10 months ago
- Official implementation of GROOT, CoRL 2023 (☆60, updated last year)