Zhoues / MineDreamer
[IROS'25 Oral & NeurIPS'24 Workshop] Official implementation of "MineDreamer: Learning to Follow Instructions via Chain-of-Imagination for Simulated-World Control"
☆95 · Updated 3 months ago
Alternatives and similar repositories for MineDreamer
Users interested in MineDreamer are comparing it to the repositories listed below.
- [CVPR 2024] The official implementation of MP5 ☆104 · Updated last year
- [ECCV 2024] STEVE in Minecraft is for See and Think: Embodied Agent in Virtual Environment ☆39 · Updated last year
- Official implementation of the paper "ROCKET-1: Mastering Open-World Interaction with Visual-Temporal Context Prompting" (CVPR 2025) ☆45 · Updated 5 months ago
- [ICML 2025 Oral] Official repo of EmbodiedBench, a comprehensive benchmark designed to evaluate MLLMs as embodied agents ☆190 · Updated 2 months ago
- ☆45 · Updated last year
- ☆112 · Updated 6 months ago
- Official repository for "RLVR-World: Training World Models with Reinforcement Learning" (NeurIPS 2025), https://arxiv.org/abs/2505.13934 ☆108 · Updated 2 weeks ago
- Official implementation of "JARVIS-VLA: Post-Training Large-Scale Vision Language Models to Play Visual Games with Keyboards and Mouse" ☆100 · Updated last month
- GROOT: Learning to Follow Instructions by Watching Gameplay Videos (ICLR 2024 Spotlight) ☆66 · Updated last year
- Official implementation of "Self-Improving Video Generation" ☆72 · Updated 5 months ago
- ☆131 · Updated last year
- [ACL 2024] VillagerAgent: a graph-based Minecraft multi-agent framework ☆78 · Updated 3 months ago
- ☆88 · Updated 2 months ago
- Evaluate Multimodal LLMs as Embodied Agents ☆54 · Updated 7 months ago
- [NeurIPS 2024] Official implementation of "Optimus-1: Hybrid Multimodal Memory Empowered Agents Excel in Long-Horizon Tasks" ☆84 · Updated 3 months ago
- HAZARD challenge ☆36 · Updated 5 months ago
- [IJCV] EgoPlan-Bench: Benchmarking Multimodal Large Language Models for Human-Level Planning ☆73 · Updated 10 months ago
- A collection of papers from the continuing line of work that began with World Models ☆185 · Updated last year
- [ECCV 2024] 🐙 Octopus, an embodied vision-language model trained with RLEF that excels at embodied visual planning and programming ☆292 · Updated last year
- Official repository for "iVideoGPT: Interactive VideoGPTs are Scalable World Models" (NeurIPS 2024), https://arxiv.org/abs/2405.15223 ☆152 · Updated 2 weeks ago
- [ICML 2024] The official implementation of "DecisionNCE: Embodied Multimodal Representations via Implicit Preference Learning" ☆80 · Updated 4 months ago
- Official implementation of WebVLN: Vision-and-Language Navigation on Websites ☆29 · Updated last year
- OpenEQA: Embodied Question Answering in the Era of Foundation Models ☆319 · Updated last year
- [ICLR 2025] Official implementation and benchmark evaluation repository of "PhysBench: Benchmarking and Enhancing Vision-Language Models …" ☆72 · Updated 4 months ago
- Machine Mental Imagery: Empower Multimodal Reasoning with Latent Visual Tokens (arXiv 2025) ☆168 · Updated 2 months ago
- LoTa-Bench: Benchmarking Language-oriented Task Planners for Embodied Agents (ICLR 2024) ☆79 · Updated 4 months ago
- Source code for the paper "COMBO: Compositional World Models for Embodied Multi-Agent Cooperation" ☆41 · Updated 6 months ago
- [ICML'25] The PyTorch implementation of the paper "AdaWorld: Learning Adaptable World Models with Latent Actions" ☆161 · Updated 3 months ago
- [ICLR 2024 Spotlight] Text2Reward: Reward Shaping with Language Models for Reinforcement Learning ☆180 · Updated 9 months ago
- Code release for "Pre-training Contextualized World Models with In-the-wild Videos for Reinforcement Learning" (NeurIPS 2023), https://ar… ☆67 · Updated last year