kyegomez / SayCan
Implementation of "Do As I Can, Not As I Say: Grounding Language in Robotic Affordances" by Google
☆20 · Updated 2 weeks ago
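For context, the SayCan recipe this repository implements can be sketched in a few lines: an LLM scores each candidate skill for usefulness toward the instruction ("say"), a value function scores whether the skill can succeed in the current state ("can"), and the robot executes the skill maximizing the product. The scoring functions below are hypothetical stand-ins for illustration, not the repo's actual API.

```python
# Minimal sketch of SayCan's skill selection (hedged: llm_score and
# affordance_score are hypothetical stand-ins, not this repo's API).

def select_skill(instruction, state, skills, llm_score, affordance_score):
    """Pick the skill maximizing p_LLM(skill | instruction) * affordance(skill, state)."""
    return max(
        skills,
        key=lambda s: llm_score(instruction, s) * affordance_score(s, state),
    )

# Toy example with hard-coded scores: the LLM prefers grasping the apple,
# but the affordance model says the robot must reach the table first.
llm = {"pick up the apple": 0.7, "go to the table": 0.2, "do nothing": 0.1}
aff = {"pick up the apple": 0.1, "go to the table": 0.9, "do nothing": 1.0}

best = select_skill(
    "bring me an apple",
    state=None,
    skills=list(llm),
    llm_score=lambda instr, s: llm[s],
    affordance_score=lambda s, st: aff[s],
)
print(best)  # the affordance term overrides the LLM's first choice
```

The product structure is the key design choice: a skill that is linguistically relevant but currently infeasible (low affordance) is suppressed, and vice versa.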
Alternatives and similar repositories for SayCan
Users interested in SayCan are comparing it to the repositories listed below.
- PyTorch implementation of the models RT-1-X and RT-2-X from the paper "Open X-Embodiment: Robotic Learning Datasets and RT-X Models" ☆228 · Updated last week
- ☆156 · Updated last year
- Official Task Suite Implementation of ICML'23 Paper "VIMA: General Robot Manipulation with Multimodal Prompts" ☆316 · Updated 2 years ago
- Codebase for paper: RoCo: Dialectic Multi-Robot Collaboration with Large Language Models ☆225 · Updated 2 years ago
- ProgPrompt for VirtualHome ☆141 · Updated 2 years ago
- A gym environment for ALOHA ☆166 · Updated 3 weeks ago
- Implementation of DeepMind's RoboCat: "Self-Improving Foundation Agent for Robotic Manipulation", a next-generation robot LLM ☆86 · Updated 2 years ago
- Suite of human-collected datasets and a multi-task continuous control benchmark for open vocabulary visuolinguomotor learning ☆334 · Updated 4 months ago
- [ICLR 2024 Spotlight] Text2Reward: Reward Shaping with Language Models for Reinforcement Learning ☆186 · Updated 10 months ago
- Generating Robotic Simulation Tasks via Large Language Models ☆339 · Updated last year
- ☆230 · Updated last year
- ☆122 · Updated 4 months ago
- LoTa-Bench: Benchmarking Language-oriented Task Planners for Embodied Agents (ICLR 2024) ☆79 · Updated 4 months ago
- Code for the RA-L paper "Language Models as Zero-Shot Trajectory Generators" (https://arxiv.org/abs/2310.11604) ☆102 · Updated 7 months ago
- Implementation of the transformer from the paper "Real-World Humanoid Locomotion with Reinforcement Learning" ☆54 · Updated 2 weeks ago
- Code repository for SMART-LLM: Smart Multi-Agent Robot Task Planning using Large Language Models ☆164 · Updated last year
- Embodied Agent Interface (EAI): Benchmarking LLMs for Embodied Decision Making (NeurIPS D&B 2024 Oral) ☆266 · Updated 7 months ago
- Code for Reinforcement Learning from Vision Language Foundation Model Feedback ☆126 · Updated last year
- Codebase for HiP ☆90 · Updated last year
- PyTorch implementation of YAY Robot ☆163 · Updated last year
- ☆86 · Updated last year
- ☆65 · Updated last year
- An official implementation of Vision-Language Interpreter (ViLaIn) ☆42 · Updated last year
- Implementation of Diffusion Policy, Toyota Research's supposed breakthrough in leveraging DDPMs for learning policies for real-world Robo… ☆128 · Updated last year
- LLM-based robot that intervenes only when needed ☆35 · Updated 3 months ago
- Code for "Prompt a Robot to Walk with Large Language Models" (https://arxiv.org/abs/2309.09969) ☆111 · Updated 2 years ago
- Official Code for "Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents" ☆276 · Updated 3 years ago
- [ICCV'23] LLM-Planner: Few-Shot Grounded Planning for Embodied Agents with Large Language Models ☆206 · Updated 7 months ago
- [CoRL 2023] REFLECT: Summarizing Robot Experiences for Failure Explanation and Correction ☆101 · Updated last year
- Body Transformer: Leveraging Robot Embodiment for Policy Learning ☆175 · Updated last month