IranQin / MP5
[CVPR2024] This is the official implementation of MP5
☆106 · Updated last year
Alternatives and similar repositories for MP5
Users who are interested in MP5 are comparing it to the libraries listed below
- [IROS'25 Oral & NeurIPSw'24] Official implementation of "MineDreamer: Learning to Follow Instructions via Chain-of-Imagination for Simula… ☆97 · Updated 5 months ago
- [ICML 2025 Oral] Official repo of EmbodiedBench, a comprehensive benchmark designed to evaluate MLLMs as embodied agents. ☆214 · Updated last month
- Embodied-Reasoner: Synergizing Visual Search, Reasoning, and Action for Embodied Interactive Tasks ☆179 · Updated last month
- [IJCV] EgoPlan-Bench: Benchmarking Multimodal Large Language Models for Human-Level Planning ☆74 · Updated 11 months ago
- ☆54 · Updated last year
- Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning ☆76 · Updated 6 months ago
- Evaluate Multimodal LLMs as Embodied Agents ☆54 · Updated 9 months ago
- [ECCV 2024] STEVE in Minecraft, from "See and Think: Embodied Agent in Virtual Environment" ☆39 · Updated last year
- [NeurIPS 2025] Official implementation of "RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics" ☆195 · Updated 3 weeks ago
- Latest advances on embodied multimodal LLMs (or Vision-Language-Action models). ☆121 · Updated last year
- The repo of the paper `RoboMamba: Multimodal State Space Model for Efficient Robot Reasoning and Manipulation` ☆139 · Updated 11 months ago
- Official repository for "RLVR-World: Training World Models with Reinforcement Learning" (NeurIPS 2025), https://arxiv.org/abs/2505.13934 ☆135 · Updated 3 weeks ago
- Official code for "Embodied-R1: Reinforced Embodied Reasoning for General Robotic Manipulation" ☆102 · Updated 3 months ago
- [NeurIPS 2024] The official implementation of "Instruction-Guided Visual Masking" ☆39 · Updated last year
- Official code for the paper: Embodied Multi-Modal Agent trained by an LLM from a Parallel TextWorld ☆59 · Updated last year
- ☆104 · Updated 3 months ago
- ☆59 · Updated 7 months ago
- Official repository for "iVideoGPT: Interactive VideoGPTs are Scalable World Models" (NeurIPS 2024), https://arxiv.org/abs/2405.15223 ☆157 · Updated last month
- [arXiv 2023] Embodied Task Planning with Large Language Models ☆192 · Updated 2 years ago
- InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy ☆269 · Updated last week
- Official code of the paper "DeeR-VLA: Dynamic Inference of Multimodal Large Language Models for Efficient Robot Execution" ☆117 · Updated 9 months ago
- Code for MultiPLY: A Multisensory Object-Centric Embodied Large Language Model in 3D World ☆134 · Updated last year
- Official repo of VLABench, a large-scale benchmark designed for fairly evaluating VLAs, embodied agents, and VLMs. ☆332 · Updated last week
- [World-Model-Survey-2024] Paper list and projects for world models ☆15 · Updated last year
- Visual Embodied Brain: Let Multimodal Large Language Models See, Think, and Control in Spaces ☆85 · Updated 5 months ago
- [ICCV 2025 Oral] Latent Motion Token as the Bridging Language for Learning Robot Manipulation from Videos ☆147 · Updated last month
- OpenEQA: Embodied Question Answering in the Era of Foundation Models ☆330 · Updated last year
- 🦾 A Dual-System VLA with System2 Thinking ☆116 · Updated 3 months ago
- The official repo for "SpatialBot: Precise Spatial Understanding with Vision Language Models." ☆317 · Updated 2 months ago
- Unified Vision-Language-Action Model ☆226 · Updated last month