OpenGVLab / Instruct2Act
Instruct2Act: Mapping Multi-modality Instructions to Robotic Actions with Large Language Model
☆370 · Updated last year
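Instruct2Act's core idea is to have an LLM translate a natural-language, multi-modal instruction into executable code that chains perception primitives (segmentation, open-vocabulary recognition) with low-level action primitives. The sketch below illustrates that general pattern only: every function and data structure in it is a hypothetical stand-in, not the repository's actual API, and the LLM call is stubbed with a canned program so the example runs offline.

```python
# Minimal sketch of the instruction-to-action pattern used by
# Instruct2Act-style systems: an LLM writes a short program that chains
# perception primitives with action primitives. All names below are
# hypothetical stand-ins, NOT the repository's actual API.

def segment(scene, query):
    """Hypothetical perception primitive: return the scene object whose
    label matches `query`."""
    return next(obj for obj in scene if query in obj["label"])

def pick(obj):
    """Hypothetical action primitive: grasp an object at its position."""
    print(f"pick at {obj['pos']}")

def place(obj, target):
    """Hypothetical action primitive: place an object at a target."""
    print(f"place {obj['label']} at {target['pos']}")

def llm_generate_program(instruction):
    """Stand-in for the LLM call. A real system would prompt a model with
    the primitive signatures plus the instruction and return generated
    code; here a canned program keeps the sketch self-contained."""
    return (
        "obj = segment(scene, 'red block')\n"
        "tgt = segment(scene, 'green bowl')\n"
        "pick(obj)\n"
        "place(obj, tgt)\n"
    )

scene = [
    {"label": "red block", "pos": (0.2, 0.1)},
    {"label": "green bowl", "pos": (0.5, 0.3)},
]
program = llm_generate_program("put the red block in the green bowl")
exec(program, {"segment": segment, "pick": pick, "place": place, "scene": scene})
```

In the paper itself, such primitives are backed by foundation models (e.g., SAM for segmentation and CLIP for open-vocabulary matching) rather than the toy lookups used here.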
Alternatives and similar repositories for Instruct2Act
Users interested in Instruct2Act are comparing it to the repositories listed below.
- Code for RoboFlamingo ☆403 · Updated last year
- [arXiv 2023] Embodied Task Planning with Large Language Models ☆192 · Updated 2 years ago
- Implementation of "PaLM-E: An Embodied Multimodal Language Model" ☆325 · Updated last year
- PyTorch implementation of the models RT-1-X and RT-2-X from the paper "Open X-Embodiment: Robotic Learning Datasets and RT-X Models" ☆225 · Updated this week
- Official Task Suite Implementation of ICML'23 Paper "VIMA: General Robot Manipulation with Multimodal Prompts" ☆316 · Updated 2 years ago
- Democratization of RT-2: "RT-2: New model translates vision and language into action" ☆515 · Updated last year
- [ICML 2024] 3D-VLA: A 3D Vision-Language-Action Generative World Model ☆575 · Updated 11 months ago
- Generating Robotic Simulation Tasks via Large Language Models ☆336 · Updated last year
- Embodied Chain of Thought: A robotic policy that reasons to solve the task. ☆309 · Updated 6 months ago
- ☆394 · Updated 8 months ago
- [ICML 2024] Official code repository for 3D embodied generalist agent LEO ☆463 · Updated 5 months ago
- VoxPoser: Composable 3D Value Maps for Robotic Manipulation with Language Models ☆736 · Updated 7 months ago
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation" ☆280 · Updated last year
- The official codebase for ManipLLM: Embodied Multimodal Large Language Model for Object-Centric Robotic Manipulation (CVPR 2024) ☆141 · Updated last year
- The official repo for "SpatialBot: Precise Spatial Understanding with Vision Language Models." ☆309 · Updated 3 weeks ago
- [CVPR 2025] RoboBrain: A Unified Brain Model for Robotic Manipulation from Abstract to Concrete. Official Repository. ☆320 · Updated 3 months ago
- ☆83 · Updated 2 years ago
- The Official Implementation of RoboMatrix ☆97 · Updated 4 months ago
- [IROS24 Oral] ManipVQA: Injecting Robotic Affordance and Physically Grounded Information into Multi-Modal Large Language Models ☆98 · Updated last year
- Official Code for RVT-2 and RVT ☆379 · Updated 7 months ago
- ☆219 · Updated last year
- A Survey of Embodied Learning for Object-Centric Robotic Manipulation ☆239 · Updated last year
- A Foundational Vision-Language-Action Model for Synergizing Cognition and Action in Robotic Manipulation ☆352 · Updated 4 months ago
- The repo for the paper "RoboMamba: Multimodal State Space Model for Efficient Robot Reasoning and Manipulation" ☆138 · Updated 9 months ago
- DexGraspVLA: A Vision-Language-Action Framework Towards General Dexterous Grasping ☆386 · Updated last month
- [ICLR'25] LLaRA: Supercharging Robot Learning Data for Vision-Language Policy ☆225 · Updated 6 months ago
- [CVPR 2025] The official implementation of "Universal Actions for Enhanced Embodied Foundation Models" ☆202 · Updated 6 months ago
- OpenHelix: An Open-source Dual-System VLA Model for Robotic Manipulation ☆281 · Updated last month
- "MimicPlay: Long-Horizon Imitation Learning by Watching Human Play" code repository ☆293 · Updated last year
- Official repo of VLABench, a large-scale benchmark for fairly evaluating VLAs, embodied agents, and VLMs ☆298 · Updated 2 months ago