pzhren / Surfer
A World Model-Based Framework for Vision-Language Robot Manipulation
☆29 · Updated 3 weeks ago
Alternatives and similar repositories for Surfer
Users that are interested in Surfer are comparing it to the libraries listed below
- ProgPrompt for Virtualhome ☆141 · Updated 2 years ago
- Codebase for paper: RoCo: Dialectic Multi-Robot Collaboration with Large Language Models ☆227 · Updated 2 years ago
- ☆83 · Updated 2 years ago
- Code to evaluate a solution in the BEHAVIOR benchmark: starter code, baselines, submodules to iGibson and BDDL repos ☆69 · Updated last year
- [CoRL 2023] REFLECT: Summarizing Robot Experiences for Failure Explanation and Correction ☆101 · Updated last year
- ☆46 · Updated last year
- Official implementation of Matcha-agent, https://arxiv.org/abs/2303.08268 ☆27 · Updated last year
- Prompter for Embodied Instruction Following ☆18 · Updated last year
- MiniGrid Implementation of BEHAVIOR Tasks ☆56 · Updated last month
- Official code for the paper: Embodied Multi-Modal Agent trained by an LLM from a Parallel TextWorld ☆59 · Updated last year
- Code for Reinforcement Learning from Vision Language Foundation Model Feedback ☆126 · Updated last year
- [arXiv 2023] Embodied Task Planning with Large Language Models ☆192 · Updated 2 years ago
- NeurIPS 2022 paper "VLMbench: A Compositional Benchmark for Vision-and-Language Manipulation" ☆96 · Updated 6 months ago
- Official code release of the AAAI 2024 paper SayCanPay ☆50 · Updated 3 weeks ago
- Paper: Integrating Action Knowledge and LLMs for Task Planning and Situation Handling in Open Worlds ☆35 · Updated last year
- Official codebase for EmbCLIP ☆132 · Updated 2 years ago
- [ICLR 2024] PyTorch code for Plan-Seq-Learn: Language Model Guided RL for Solving Long Horizon Robotics Tasks ☆119 · Updated last year
- ☆36 · Updated 2 years ago
- Enhancing LLM/VLM capability for robot task and motion planning with extra algorithm-based tools ☆73 · Updated last year
- An official implementation of Vision-Language Interpreter (ViLaIn) ☆44 · Updated last year
- PyTorch implementation of the models RT-1-X and RT-2-X from the paper "Open X-Embodiment: Robotic Learning Datasets and RT-X Models" ☆229 · Updated last week
- Chain-of-Thought Predictive Control ☆58 · Updated 2 years ago
- [ICCV'23] LLM-Planner: Few-Shot Grounded Planning for Embodied Agents with Large Language Models ☆208 · Updated 7 months ago
- ☆122 · Updated 4 months ago
- ☆45 · Updated 2 years ago
- ☆32 · Updated last year
- Code for training embodied agents using imitation learning at scale in Habitat-Lab ☆44 · Updated 7 months ago
- LLM3: Large Language Model-based Task and Motion Planning with Motion Failure Reasoning ☆93 · Updated last year
- ☆156 · Updated last year
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation" ☆44 · Updated last year