pzhren / Surfer
A World Model-Based Framework for Vision-Language Robot Manipulation
☆27 · Updated last week
Alternatives and similar repositories for Surfer
Users interested in Surfer are comparing it to the repositories listed below.
- ProgPrompt for Virtualhome ☆138 · Updated 2 years ago
- NeurIPS 2022 Paper "VLMbench: A Compositional Benchmark for Vision-and-Language Manipulation" ☆96 · Updated 2 months ago
- [arXiv 2023] Embodied Task Planning with Large Language Models ☆188 · Updated last year
- ☆45 · Updated last year
- Official codebase for EmbCLIP ☆129 · Updated 2 years ago
- Codebase for paper: RoCo: Dialectic Multi-Robot Collaboration with Large Language Models ☆215 · Updated last year
- ☆31 · Updated 10 months ago
- Official code release of AAAI 2024 paper SayCanPay. ☆49 · Updated last year
- ☆83 · Updated 2 years ago
- [CoRL 2023] REFLECT: Summarizing Robot Experiences for Failure Explanation and Correction ☆97 · Updated last year
- Prompter for Embodied Instruction Following ☆18 · Updated last year
- Official code for the paper: Embodied Multi-Modal Agent trained by an LLM from a Parallel TextWorld ☆57 · Updated 10 months ago
- Code to evaluate a solution in the BEHAVIOR benchmark: starter code, baselines, submodules to iGibson and BDDL repos ☆66 · Updated last year
- Chain-of-Thought Predictive Control ☆58 · Updated 2 years ago
- The project repository for paper EMOS: Embodiment-aware Heterogeneous Multi-robot Operating System with LLM Agents: https://arxiv.org/abs… ☆46 · Updated 7 months ago
- MiniGrid Implementation of BEHAVIOR Tasks ☆49 · Updated 11 months ago
- Code for the paper Watch-And-Help: A Challenge for Social Perception and Human-AI Collaboration ☆97 · Updated 3 years ago
- Code for CVPR22 paper One Step at a Time: Long-Horizon Vision-and-Language Navigation with Milestones ☆13 · Updated 3 years ago
- Official implementation of Matcha-agent, https://arxiv.org/abs/2303.08268 ☆26 · Updated 11 months ago
- Code for training embodied agents using imitation learning at scale in Habitat-Lab ☆43 · Updated 3 months ago
- Enhancing LLM/VLM capability for robot task and motion planning with extra algorithm-based tools. ☆74 · Updated 10 months ago
- Code for the paper Bootstrap Your Own Skills: Learning to Solve New Tasks with Large Language Model Guidance, accepted to CoRL 2023 as an… ☆32 · Updated 3 weeks ago
- ☆34 · Updated last year
- 🚀 Run AI2-THOR with Google Colab ☆33 · Updated 3 years ago
- 🔀 Visual Room Rearrangement ☆121 · Updated last year
- (NeurIPS '22) LISA: Learning Interpretable Skill Abstractions - A framework for unsupervised skill learning using imitation ☆31 · Updated 2 years ago
- A mini-framework for running AI2-THOR with Docker. ☆37 · Updated last year
- Code for training embodied agents using IL and RL finetuning at scale for ObjectNav ☆77 · Updated 3 months ago
- ☆17 · Updated 7 months ago
- Code for ICRA24 paper "Think, Act, and Ask: Open-World Interactive Personalized Robot Navigation". Paper: https://arxiv.org/abs/2310.07968 … ☆31 · Updated last year