OpenGVLab / Instruct2Act
Instruct2Act: Mapping Multi-modality Instructions to Robotic Actions with Large Language Model
☆365 · Updated last year
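The repository name summarizes the approach at a high level: a large language model maps a (possibly multimodal) instruction onto calls against robot perception and action primitives. The sketch below illustrates only that general pattern; every function name and the prompt template are hypothetical placeholders chosen for illustration, not Instruct2Act's actual API.

```python
# A minimal sketch of the instruction-to-action pattern the repository name
# describes: an LLM emits code over a small set of perception/action primitives.
# All names here (llm_generate, segment, locate, pick_place) are hypothetical
# placeholders, NOT the repository's actual interface.

def llm_generate(prompt: str) -> str:
    """Placeholder for an LLM call that returns executable policy code."""
    raise NotImplementedError("plug in your LLM client here")

def segment(image):
    """Placeholder perception primitive: return object masks for the image."""
    raise NotImplementedError

def locate(name, masks):
    """Placeholder perception primitive: pick the mask matching an object name."""
    raise NotImplementedError

def pick_place(src_pose, dst_pose):
    """Placeholder action primitive: move an object from src to dst."""
    raise NotImplementedError

PROMPT = """You control a robot arm through three primitives:
segment(image), locate(name, masks), pick_place(src, dst).
Write Python code that accomplishes: {instruction}"""

def instruction_to_actions(instruction, image):
    # 1. Ask the LLM to translate the instruction into primitive calls.
    code = llm_generate(PROMPT.format(instruction=instruction))
    # 2. Execute the generated code with only whitelisted primitives in scope.
    scope = {"segment": segment, "locate": locate,
             "pick_place": pick_place, "image": image}
    exec(code, scope)
```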
Alternatives and similar repositories for Instruct2Act
Users interested in Instruct2Act are comparing it to the repositories listed below.
- Code for RoboFlamingo ☆388 · Updated last year
- Official Task Suite Implementation of ICML'23 Paper "VIMA: General Robot Manipulation with Multimodal Prompts" ☆306 · Updated last year
- [arXiv 2023] Embodied Task Planning with Large Language Models ☆188 · Updated last year
- Embodied Chain of Thought: a robotic policy that reasons about the task before acting. ☆267 · Updated 2 months ago
- Democratization of RT-2 "RT-2: New model translates vision and language into action" ☆475 · Updated 11 months ago
- [ICML 2024] Official code repository for 3D embodied generalist agent LEO ☆443 · Updated 2 months ago
- Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success ☆481 · Updated last month
- PyTorch implementation of the models RT-1-X and RT-2-X from the paper "Open X-Embodiment: Robotic Learning Datasets and RT-X Models" ☆211 · Updated last week
- [ICML 2024] 3D-VLA: A 3D Vision-Language-Action Generative World Model ☆531 · Updated 7 months ago
- ☆362 · Updated 5 months ago
- Implementation of "PaLM-E: An Embodied Multimodal Language Model" ☆312 · Updated last year
- Paper list for the survey "Toward General-Purpose Robots via Foundation Models: A Survey and Meta-Analysis" ☆434 · Updated 5 months ago
- VoxPoser: Composable 3D Value Maps for Robotic Manipulation with Language Models ☆708 · Updated 4 months ago
- Evaluating and reproducing real-world robot manipulation policies (e.g., RT-1, RT-1-X, Octo) in simulation under common setups (e.g., Goo… ☆661 · Updated 2 months ago
- Generating Robotic Simulation Tasks via Large Language Models ☆327 · Updated last year
- Official Code for RVT-2 and RVT ☆347 · Updated 4 months ago
- [RSS 2025] Learning to Act Anywhere with Task-centric Latent Actions ☆474 · Updated this week
- Benchmarking Knowledge Transfer in Lifelong Robot Learning ☆629 · Updated 3 months ago
- ☆207 · Updated 2 months ago
- 🔥 SpatialVLA: a spatially enhanced vision-language-action model trained on 1.1 million real robot episodes. Accepted at RSS 2025. ☆360 · Updated this week
- [CoRL 2023] This repository contains data generation and training code for Scaling Up & Distilling Down ☆393 · Updated 10 months ago
- Code for the paper "3D Diffuser Actor: Policy Diffusion with 3D Scene Representations" ☆324 · Updated 10 months ago
- A Survey of Embodied Learning for Object-Centric Robotic Manipulation ☆220 · Updated 8 months ago
- A Foundational Vision-Language-Action Model for Synergizing Cognition and Action in Robotic Manipulation ☆288 · Updated 3 weeks ago
- The official codebase for ManipLLM: Embodied Multimodal Large Language Model for Object-Centric Robotic Manipulation (CVPR 2024) ☆133 · Updated 11 months ago
- "MimicPlay: Long-Horizon Imitation Learning by Watching Human Play" code repository☆272Updated last year
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation"☆261Updated last year
- CALVIN - A benchmark for Language-Conditioned Policy Learning for Long-Horizon Robot Manipulation Tasks☆597Updated 4 months ago
- ☆188Updated last year
- Official repo of VLABench, a large-scale benchmark designed for fairly evaluating VLAs, embodied agents, and VLMs ☆238 · Updated 3 weeks ago