cnsdqd-dyb / VillagerAgent-Minecraft-multiagent-framework
(VillagerAgent, ACL 2024) A graph-based Minecraft multi-agent framework
☆67 · Updated last month
Alternatives and similar repositories for VillagerAgent-Minecraft-multiagent-framework
Users who are interested in VillagerAgent-Minecraft-multiagent-framework are comparing it to the libraries listed below.
- [IROS'25 Oral & NeurIPSw'24] Official implementation of "MineDreamer: Learning to Follow Instructions via Chain-of-Imagination for Simula…" ☆91 · Updated last month
- [NeurIPS 2024] Official Implementation for Optimus-1: Hybrid Multimodal Memory Empowered Agents Excel in Long-Horizon Tasks ☆78 · Updated last month
- ☆44 · Updated last year
- [ECCV 2024] STEVE in Minecraft, from "See and Think: Embodied Agent in Virtual Environment" ☆39 · Updated last year
- [CVPR 2024] The official implementation of MP5 ☆103 · Updated last year
- ☆109 · Updated 3 months ago
- Official implementation of the paper "ROCKET-1: Mastering Open-World Interaction with Visual-Temporal Context Prompting" (CVPR 2025) ☆42 · Updated 3 months ago
- [ICML 2025 Oral] Official repo of EmbodiedBench, a comprehensive benchmark designed to evaluate MLLMs as embodied agents. ☆163 · Updated 3 weeks ago
- ☆92 · Updated last year
- Official implementation of "Self-Improving Video Generation" ☆66 · Updated 3 months ago
- Official code for the paper "WALL-E: World Alignment by NeuroSymbolic Learning improves World Model-based LLM Agents" ☆38 · Updated 2 months ago
- ☆37 · Updated 4 months ago
- GROOT: Learning to Follow Instructions by Watching Gameplay Videos (ICLR 2024 Spotlight) ☆66 · Updated last year
- ☆60 · Updated 5 months ago
- [ECCV 2024] 🐙 Octopus, an embodied vision-language model trained with RLEF that excels at embodied visual planning and programming. ☆290 · Updated last year
- ☆69 · Updated 2 weeks ago
- Official repository for "RLVR-World: Training World Models with Reinforcement Learning", https://arxiv.org/abs/2505.13934 ☆70 · Updated last month
- A collection of papers on the continuing line of work that started from World Models. ☆179 · Updated last year
- [ICLR 2024] Source code for the paper "Building Cooperative Embodied Agents Modularly with Large Language Models" ☆265 · Updated 4 months ago
- ☆131 · Updated last year
- ☆47 · Updated 2 months ago
- Official Implementation of "JARVIS-VLA: Post-Training Large-Scale Vision Language Models to Play Visual Games with Keyboards and Mouse" ☆88 · Updated 2 months ago
- Source code for the paper "COMBO: Compositional World Models for Embodied Multi-Agent Cooperation" ☆38 · Updated 4 months ago
- Code for NeurIPS 2024 paper "AutoManual: Constructing Instruction Manuals by LLM Agents via Interactive Environmental Learning" ☆43 · Updated 8 months ago
- Evaluate Multimodal LLMs as Embodied Agents ☆52 · Updated 5 months ago
- Multimodal RewardBench ☆42 · Updated 5 months ago
- [NeurIPS 2024] A task generation and model evaluation system for multimodal language models. ☆72 · Updated 8 months ago
- Official Implementation of ARPO: End-to-End Policy Optimization for GUI Agents with Experience Replay ☆99 · Updated 2 months ago
- Simulating Large-Scale Multi-Agent Interactions with Limited Multimodal Senses and Physical Needs ☆88 · Updated 4 months ago
- Towards Large Multimodal Models as Visual Foundation Agents ☆225 · Updated 3 months ago