kyegomez / SayCan
Implementation of "Do As I Can, Not As I Say: Grounding Language in Robotic Affordances" by Google
☆16 · Updated 3 weeks ago
Alternatives and similar repositories for SayCan:
Users interested in SayCan are comparing it to the repositories listed below.
- Code for "Interactive Task Planning with Language Models" ☆25 · Updated last year
- ☆124 · Updated 7 months ago
- Codebase for HiP ☆88 · Updated last year
- ☆44 · Updated 10 months ago
- ☆29 · Updated 5 months ago
- Implementation of DeepMind's RoboCat: "Self-Improving Foundation Agent for Robotic Manipulation", a next-generation robot LLM ☆81 · Updated last year
- ☆17 · Updated this week
- GROOT: Learning to Follow Instructions by Watching Gameplay Videos ☆61 · Updated last year
- Instruction Following Agents with Multimodal Transformers ☆52 · Updated 2 years ago
- ☆44 · Updated last year
- A vast array of Multi-Modal Embodied Robotic Foundation Models! ☆25 · Updated 11 months ago
- Source code for the paper "Leveraging Pre-trained Large Language Models to Construct and Utilize World Models for Model-based Task Pla…" ☆83 · Updated 6 months ago
- [ICCV'23] LLM-Planner: Few-Shot Grounded Planning for Embodied Agents with Large Language Models ☆162 · Updated 8 months ago
- [CoRL 2024] Official code for "Scaling Robot Policy Learning via Zero-Shot Labeling with Foundation Models" ☆21 · Updated 2 months ago
- Official implementation of the paper "Read to Play (R2-Play): Decision Transformer with Multimodal Game Instruction" ☆34 · Updated last year
- ☆73Updated · 6 months ago
- Code for the Ask4Help project ☆22 · Updated 2 years ago
- ☆44 · Updated last year
- ☆106 · Updated 3 months ago
- LoTa-Bench: Benchmarking Language-oriented Task Planners for Embodied Agents (ICLR 2024) ☆65 · Updated 6 months ago
- Source code for the paper "COMBO: Compositional World Models for Embodied Multi-Agent Cooperation" ☆28 · Updated 10 months ago
- ProgPrompt for VirtualHome ☆126 · Updated last year
- NeurIPS 2022 paper "VLMbench: A Compositional Benchmark for Vision-and-Language Manipulation" ☆86 · Updated last year
- [ICLR 2024 Spotlight] Code for the paper "Text2Reward: Reward Shaping with Language Models for Reinforcement Learning" ☆147 · Updated 2 months ago
- LLM Dynamic Planner: combining LLMs with PDDL planners to solve embodied tasks ☆41 · Updated last month
- 🚀 Run AI2-THOR with Google Colab ☆26 · Updated 2 years ago
- Official implementation of ReALFRED (ECCV'24) ☆35 · Updated 4 months ago
- [NeurIPS 2024] GenRL: Multimodal-foundation world models enable grounding language and video prompts into embodied domains, by turning th… ☆68 · Updated last month
- ☆34 · Updated last month
- ☆72 · Updated last year