kyegomez / SayCan
Implementation of "Do As I Can, Not As I Say: Grounding Language in Robotic Affordances" by Google
☆23 · Updated last week
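The SayCan paper combines two scores to pick a robot skill: the language model's estimate that a skill is useful for the instruction ("say") and a learned affordance/value function's estimate that the skill will succeed in the current state ("can"). A minimal sketch of that selection rule, with hypothetical function names and stubbed scorers (not this repository's API):

```python
def saycan_select(instruction, state, skills, llm_score, affordance):
    """Return the skill maximizing llm_score(instruction, skill) * affordance(state, skill)."""
    best_skill, best_score = None, float("-inf")
    for skill in skills:
        # "say": LLM usefulness score; "can": affordance/value score
        score = llm_score(instruction, skill) * affordance(state, skill)
        if score > best_score:
            best_skill, best_score = skill, score
    return best_skill

# Toy usage with hand-written scores standing in for the LLM and value function:
skills = ["pick up sponge", "go to table", "pick up apple"]
llm = lambda instr, s: {"pick up sponge": 0.7, "go to table": 0.2, "pick up apple": 0.1}[s]
can = lambda st, s: {"pick up sponge": 0.9, "go to table": 0.8, "pick up apple": 0.1}[s]
print(saycan_select("clean the spill", "kitchen", skills, llm, can))  # → pick up sponge
```

Grounding selection this way lets the LLM propose semantically useful skills while the value function vetoes skills the robot cannot actually execute in its current state.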
Alternatives and similar repositories for SayCan
Users interested in SayCan are comparing it to the libraries listed below.
- Official Task Suite Implementation of the ICML'23 paper "VIMA: General Robot Manipulation with Multimodal Prompts" ☆324 · Updated 2 years ago
- ☆158 · Updated last year
- ProgPrompt for VirtualHome ☆145 · Updated 2 years ago
- PyTorch implementation of the models RT-1-X and RT-2-X from the paper "Open X-Embodiment: Robotic Learning Datasets and RT-X Models" ☆234 · Updated last week
- Codebase for the paper "RoCo: Dialectic Multi-Robot Collaboration with Large Language Models" ☆234 · Updated 2 years ago
- Code for the RA-L paper "Language Models as Zero-Shot Trajectory Generators", available at https://arxiv.org/abs/2310.11604 ☆105 · Updated 10 months ago
- Implementation of DeepMind's RoboCat, "Self-Improving Foundation Agent for Robotic Manipulation", a next-generation robot LLM ☆87 · Updated 2 years ago
- Generating Robotic Simulation Tasks via Large Language Models ☆342 · Updated last year
- An official implementation of Vision-Language Interpreter (ViLaIn) ☆47 · Updated last year
- LLM-based robot that intervenes only when needed ☆36 · Updated 5 months ago
- Code for "Prompt a Robot to Walk with Large Language Models", https://arxiv.org/abs/2309.09969 ☆112 · Updated 2 years ago
- Code repository for "SMART-LLM: Smart Multi-Agent Robot Task Planning using Large Language Models" ☆180 · Updated last year
- PyTorch implementation of YAY Robot ☆169 · Updated last year
- ☆261 · Updated last year
- [ICLR 2024 Spotlight] Text2Reward: Reward Shaping with Language Models for Reinforcement Learning ☆194 · Updated last year
- Suite of human-collected datasets and a multi-task continuous-control benchmark for open-vocabulary visuolinguomotor learning ☆343 · Updated last week
- Implementation of the transformer from the paper "Real-World Humanoid Locomotion with Reinforcement Learning" ☆61 · Updated last week
- A gym environment for ALOHA ☆184 · Updated 3 months ago
- Body Transformer: Leveraging Robot Embodiment for Policy Learning ☆183 · Updated 4 months ago
- Embodied Agent Interface (EAI): Benchmarking LLMs for Embodied Decision Making (NeurIPS D&B 2024 Oral) ☆278 · Updated 10 months ago
- [ICLR 2024] PyTorch code for "Plan-Seq-Learn: Language Model Guided RL for Solving Long Horizon Robotics Tasks" ☆120 · Updated last year
- Code for Reinforcement Learning from Vision-Language Foundation Model Feedback ☆135 · Updated last year
- Enhancing LLM/VLM capability for robot task and motion planning with extra algorithm-based tools ☆74 · Updated last year
- Implementation of "PaLM-E: An Embodied Multimodal Language Model" ☆333 · Updated last year
- Official implementation of Matcha-agent, https://arxiv.org/abs/2303.08268 ☆27 · Updated last year
- NeurIPS 2022 paper "VLMbench: A Compositional Benchmark for Vision-and-Language Manipulation" ☆98 · Updated 8 months ago
- LLM3: Large Language Model-based Task and Motion Planning with Motion Failure Reasoning ☆95 · Updated last year
- Official code release of the AAAI 2024 paper SayCanPay ☆53 · Updated 2 months ago
- Official implementation of VIOLA ☆122 · Updated 2 years ago
- Code for subgoal synthesis via image editing ☆144 · Updated 2 years ago