kyegomez / SayCan
Implementation of "Do As I Can, Not As I Say: Grounding Language in Robotic Affordances" by Google
☆22 · Updated 2 months ago
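For context on what the repository implements: SayCan ranks candidate robot skills by combining an LLM's task-grounding score (how useful a skill sounds for the instruction) with a learned affordance/value estimate (how likely the skill is to succeed from the current state), then executes the skill with the highest product. A minimal sketch of that scoring rule, with hypothetical scores standing in for the real LLM and value-function outputs (not code from this repository):

```python
# Toy illustration of the SayCan selection rule:
#   score(skill) = p_LLM(skill | instruction) * p_affordance(skill | state)
# Both dictionaries below are made-up stand-ins for the actual models.

def saycan_select(llm_scores: dict, affordances: dict) -> str:
    """Return the skill that maximizes llm_score * affordance value."""
    return max(llm_scores, key=lambda s: llm_scores[s] * affordances.get(s, 0.0))

# Instruction: "I spilled my drink, can you help?"
llm_scores = {"pick up the sponge": 0.6, "go to the sink": 0.3, "open the fridge": 0.1}
# Value function says the sponge is out of reach but the sink is nearby.
affordances = {"pick up the sponge": 0.2, "go to the sink": 0.9, "open the fridge": 0.5}

best = saycan_select(llm_scores, affordances)  # "go to the sink" (0.3 * 0.9 = 0.27)
```

The grounding is the key design choice: the LLM alone would pick "pick up the sponge", but weighting by what the robot can actually do from its current state changes the decision.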
Alternatives and similar repositories for SayCan
Users interested in SayCan are comparing it to the repositories listed below.
- Implementation of DeepMind's RoboCat: "Self-Improving Foundation Agent for Robotic Manipulation", a next-generation robot LLM ☆87 · Updated 2 years ago
- ☆155 · Updated last year
- ProgPrompt for VirtualHome ☆145 · Updated 2 years ago
- PyTorch implementation of the models RT-1-X and RT-2-X from the paper "Open X-Embodiment: Robotic Learning Datasets and RT-X Models" ☆231 · Updated 3 weeks ago
- Codebase for the paper "RoCo: Dialectic Multi-Robot Collaboration with Large Language Models" ☆233 · Updated 2 years ago
- Official task suite implementation of the ICML'23 paper "VIMA: General Robot Manipulation with Multimodal Prompts" ☆320 · Updated 2 years ago
- Generating Robotic Simulation Tasks via Large Language Models ☆341 · Updated last year
- Implementation of the transformer from the paper "Real-World Humanoid Locomotion with Reinforcement Learning" ☆60 · Updated 2 months ago
- A gym environment for ALOHA ☆180 · Updated 2 months ago
- An official implementation of Vision-Language Interpreter (ViLaIn) ☆45 · Updated last year
- [ICCV'23] LLM-Planner: Few-Shot Grounded Planning for Embodied Agents with Large Language Models ☆209 · Updated 8 months ago
- ☆256 · Updated last year
- Code for the RA-L paper "Language Models as Zero-Shot Trajectory Generators", available at https://arxiv.org/abs/2310.11604 ☆105 · Updated 9 months ago
- Code repository for SMART-LLM: Smart Multi-Agent Robot Task Planning using Large Language Models ☆174 · Updated last year
- Embodied Agent Interface (EAI): Benchmarking LLMs for Embodied Decision Making (NeurIPS D&B 2024 Oral) ☆273 · Updated 9 months ago
- NeurIPS 2022 paper "VLMbench: A Compositional Benchmark for Vision-and-Language Manipulation" ☆96 · Updated 7 months ago
- Suite of human-collected datasets and a multi-task continuous control benchmark for open-vocabulary visuolinguomotor learning ☆340 · Updated 3 weeks ago
- [ICLR 2024 Spotlight] Text2Reward: Reward Shaping with Language Models for Reinforcement Learning ☆192 · Updated last year
- Body Transformer: Leveraging Robot Embodiment for Policy Learning ☆180 · Updated 3 months ago
- Code for "Prompt a Robot to Walk with Large Language Models", https://arxiv.org/abs/2309.09969 ☆112 · Updated 2 years ago
- [CoRL 2023] REFLECT: Summarizing Robot Experiences for Failure Explanation and Correction ☆101 · Updated last year
- ☆123 · Updated 6 months ago
- PyTorch implementation of YAY Robot ☆168 · Updated last year
- [ICLR 2024] PyTorch code for Plan-Seq-Learn: Language Model Guided RL for Solving Long Horizon Robotics Tasks ☆119 · Updated last year
- Official code for "Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents" ☆277 · Updated 3 years ago
- LLM-based robot that intervenes only when needed ☆36 · Updated 4 months ago
- Codebase for HiP ☆90 · Updated 2 years ago
- Code for "Hierarchical World Models as Visual Whole-Body Humanoid Controllers" ☆196 · Updated 3 months ago
- https://arxiv.org/abs/2312.10807 ☆76 · Updated 3 weeks ago
- Enhancing LLM/VLM capability for robot task and motion planning with extra algorithm-based tools ☆72 · Updated last year