say-can / say-can.github.io
☆17 · Updated 3 years ago
Alternatives and similar repositories for say-can.github.io
Users interested in say-can.github.io are comparing it to the libraries listed below.
- Official repository of ICLR 2022 paper FILM: Following Instructions in Language with Modular Methods ☆127 · Updated 2 years ago
- NeurIPS 2022 paper "VLMbench: A Compositional Benchmark for Vision-and-Language Manipulation" ☆98 · Updated 9 months ago
- Hierarchical Universal Language Conditioned Policies ☆77 · Updated last year
- [ICCV 2023] ARNOLD: Language-Grounded Robot Manipulation with Continuous Object States in Realistic 3D Scenes ☆180 · Updated 10 months ago
- Code to evaluate a solution in the BEHAVIOR benchmark: starter code, baselines, submodules to iGibson and BDDL repos ☆69 · Updated last year
- 🔀 Visual Room Rearrangement ☆125 · Updated 2 years ago
- Official task suite implementation of ICML'23 paper "VIMA: General Robot Manipulation with Multimodal Prompts" ☆325 · Updated 2 years ago
- [ICRA 2023] Grounding Language with Visual Affordances over Unstructured Data ☆45 · Updated 2 years ago
- ☆124 · Updated 7 months ago
- Voltron: Language-Driven Representation Learning for Robotics ☆233 · Updated 2 years ago
- PyTorch implementation of the Hiveformer research paper ☆49 · Updated 2 years ago
- [CoRL 2023] REFLECT: Summarizing Robot Experiences for Failure Explanation and Correction ☆101 · Updated last year
- [MMM 2025 Best Paper] RoLD: Robot Latent Diffusion for Multi-Task Policy Modeling ☆22 · Updated last year
- Utility functions for working with AI2-THOR. Try to do one thing once. ☆56 · Updated 3 years ago
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation"☆45Updated last year
- Official implementation of Matcha-agent, https://arxiv.org/abs/2303.08268☆27Updated last year
- Codebase for paper: RoCo: Dialectic Multi-Robot Collaboration with Large Language Models☆238Updated 2 years ago
- VRKitchen: an Interactive 3D Environment for Learning Real Life Cooking Tasks. Visit the project site for more information: https://sites…☆24Updated last year
- Pre-training Reusable Representations for Robotic Manipulation Using Diverse Human Video Data☆364Updated 2 years ago
- Companion Codebase for "No, to the Right – Online Language Corrections for Robotic Manipulation via Shared Autonomy"☆28Updated 3 years ago
- Official implementation for VIOLA☆121Updated 2 years ago
- Official Code for "Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents"☆278Updated 3 years ago
- ☆263Updated last year
- ☆45Updated 3 years ago
- LLM3: Large Language Model-based Task and Motion Planning with Motion Failure Reasoning☆95Updated last year
- Official repository for "VIP: Towards Universal Visual Reward and Representation via Value-Implicit Pre-Training"☆179Updated 2 years ago
- Episodic Transformer (E.T.) is a novel attention-based architecture for vision-and-language navigation. E.T. is based on a multimodal tra…☆93Updated 2 years ago
- Suite of human-collected datasets and a multi-task continuous control benchmark for open vocabulary visuolinguomotor learning.☆346Updated last month
- ☆47Updated last year
- Official codebase for EmbCLIP☆131Updated 2 years ago