Necolizer / RM-PRT
Realistic Robotic Manipulation Simulator and Benchmark with Progressive Reasoning Tasks
☆24 · Updated 10 months ago
Alternatives and similar repositories for RM-PRT
Users interested in RM-PRT are comparing it to the repositories listed below.
- Code to evaluate a solution in the BEHAVIOR benchmark: starter code, baselines, submodules to iGibson and BDDL repos ☆63 · Updated last year
- ☆29 · Updated 8 months ago
- Enhancing LLM/VLM capability for robot task and motion planning with extra algorithm-based tools. ☆69 · Updated 8 months ago
- MiniGrid Implementation of BEHAVIOR Tasks ☆46 · Updated 9 months ago
- (NeurIPS '22) LISA: Learning Interpretable Skill Abstractions - A framework for unsupervised skill learning using Imitation ☆29 · Updated 2 years ago
- ☆45 · Updated last year
- NeurIPS 2022 Paper "VLMbench: A Compositional Benchmark for Vision-and-Language Manipulation" ☆91 · Updated 3 weeks ago
- Official code for the paper: Embodied Multi-Modal Agent trained by an LLM from a Parallel TextWorld ☆57 · Updated 8 months ago
- ☆33 · Updated last year
- Chain-of-Thought Predictive Control ☆57 · Updated 2 years ago
- Prompter for Embodied Instruction Following ☆18 · Updated last year
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation" ☆44 · Updated last year
- Code for the paper Watch-And-Help: A Challenge for Social Perception and Human-AI Collaboration ☆97 · Updated 2 years ago
- InterPreT: Interactive Predicate Learning from Language Feedback for Generalizable Task Planning (RSS 2024) ☆30 · Updated 11 months ago
- [ICRA 2023] Grounding Language with Visual Affordances over Unstructured Data ☆43 · Updated last year
- ☆48 · Updated last year
- The project repository for the paper EMOS: Embodiment-aware Heterogeneous Multi-robot Operating System with LLM Agents: https://arxiv.org/abs… ☆42 · Updated 5 months ago
- RobotVQA is a project that develops a Deep Learning-based Cognitive Vision System to support household robots' perception while they perf… ☆17 · Updated 10 months ago
- Code for training embodied agents using imitation learning at scale in Habitat-Lab ☆42 · Updated last month
- Official implementation of Matcha-agent, https://arxiv.org/abs/2303.08268 ☆26 · Updated 9 months ago
- ☆83 · Updated last year
- Public release for "Distillation and Retrieving Generalizable Knowledge for Robot Manipulation via Language Corrections" ☆44 · Updated 11 months ago
- RoboTHOR Challenge ☆90 · Updated 4 years ago
- Implementation of Language-Conditioned Path Planning (Amber Xie, Youngwoon Lee, Pieter Abbeel, Stephen James) ☆23 · Updated last year
- Code for CVPR22 paper One Step at a Time: Long-Horizon Vision-and-Language Navigation with Milestones ☆13 · Updated 2 years ago
- [ICRA 2025] RACER: Rich Language-Guided Failure Recovery Policies for Imitation Learning ☆30 · Updated 7 months ago
- Evaluate Multimodal LLMs as Embodied Agents ☆49 · Updated 3 months ago
- Code for ICRA24 paper "Think, Act, and Ask: Open-World Interactive Personalized Robot Navigation". Paper: https://arxiv.org/abs/2310.07968 … ☆31 · Updated 11 months ago
- PyTorch code for ICRA'21 paper: "Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation" ☆78 · Updated 11 months ago
- [CoRL 2023] REFLECT: Summarizing Robot Experiences for Failure Explanation and Correction ☆94 · Updated last year