Zhoues / MineDreamer
[IROS'25 Oral & NeurIPSw'24] Official implementation of "MineDreamer: Learning to Follow Instructions via Chain-of-Imagination for Simulated-World Control"
☆97 · Updated 6 months ago
Alternatives and similar repositories for MineDreamer
Users interested in MineDreamer are comparing it to the repositories listed below.
- [CVPR 2024] Official implementation of MP5 ☆106 · Updated last year
- [ECCV 2024] STEVE in Minecraft, from "See and Think: Embodied Agent in Virtual Environment" ☆39 · Updated last year
- [ICML 2025 Oral] Official repo of EmbodiedBench, a comprehensive benchmark designed to evaluate MLLMs as embodied agents ☆236 · Updated last month
- ☆46 · Updated 2 years ago
- Official implementation of "Self-Improving Video Generation" ☆76 · Updated 7 months ago
- ☆118 · Updated 8 months ago
- ☆133 · Updated last year
- HAZARD challenge ☆37 · Updated 7 months ago
- [NeurIPS 2024] Official implementation of "Optimus-1: Hybrid Multimodal Memory Empowered Agents Excel in Long-Horizon Tasks" ☆89 · Updated 6 months ago
- ☆111 · Updated 4 months ago
- [CVPR 2025] Official implementation of "ROCKET-1: Mastering Open-World Interaction with Visual-Temporal Context Prompting" ☆46 · Updated 8 months ago
- [IJCV] EgoPlan-Bench: Benchmarking Multimodal Large Language Models for Human-Level Planning ☆75 · Updated last year
- [ICLR 2024 Spotlight] GROOT: Learning to Follow Instructions by Watching Gameplay Videos ☆65 · Updated last year
- [NeurIPS 2025] Official repository for "RLVR-World: Training World Models with Reinforcement Learning", https://arxiv.org/abs/2505.13934 ☆158 · Updated last month
- [ACL 2024] VillagerAgent: a graph-based Minecraft multi-agent framework ☆82 · Updated 5 months ago
- Evaluate multimodal LLMs as embodied agents ☆54 · Updated 10 months ago
- Official implementation of "WebVLN: Vision-and-Language Navigation on Websites" ☆30 · Updated last year
- [ECCV 2024] 🐙 Octopus, an embodied vision-language model trained with RLEF, excelling at embodied visual planning and programming ☆293 · Updated last year
- [ICLR 2025] Official implementation and benchmark evaluation repository of "PhysBench: Benchmarking and Enhancing Vision-Language Models …" ☆80 · Updated 6 months ago
- Source code for the paper "COMBO: Compositional World Models for Embodied Multi-Agent Cooperation" ☆43 · Updated 9 months ago
- [NeurIPS 2024] Official repository for "iVideoGPT: Interactive VideoGPTs are Scalable World Models", https://arxiv.org/abs/2405.15223 ☆160 · Updated 2 months ago
- A collection of papers tracing the continuing line of work that started from World Models ☆190 · Updated last year
- [ICML 2024] Official implementation of "DecisionNCE: Embodied Multimodal Representations via Implicit Preference Learning" ☆82 · Updated 6 months ago
- Official implementation of "JARVIS-VLA: Post-Training Large-Scale Vision Language Models to Play Visual Games with Keyboards and Mouse" ☆112 · Updated 3 months ago
- ☆55 · Updated last year
- Official code for the paper "Embodied Multi-Modal Agent trained by an LLM from a Parallel TextWorld" ☆60 · Updated last year
- [ICLR 2024 Spotlight] Text2Reward: Reward Shaping with Language Models for Reinforcement Learning ☆191 · Updated 11 months ago
- OpenEQA: Embodied Question Answering in the Era of Foundation Models ☆333 · Updated last year
- [ICLR 2024] LoTa-Bench: Benchmarking Language-oriented Task Planners for Embodied Agents ☆82 · Updated 6 months ago
- ☆88 · Updated last year