lizaijing / Awesome-Minecraft-Agent
Paper List of Minecraft Agents
☆40 · Updated 3 months ago
Alternatives and similar repositories for Awesome-Minecraft-Agent
Users interested in Awesome-Minecraft-Agent are comparing it to the repositories listed below.
- [NeurIPS 2024] Official Implementation for Optimus-1: Hybrid Multimodal Memory Empowered Agents Excel in Long-Horizon Tasks ☆78 · Updated last month
- Official repo for the paper DigiRL: Training In-The-Wild Device-Control Agents with Autonomous Reinforcement Learning ☆366 · Updated 4 months ago
- ☆107 · Updated 3 months ago
- Benchmark and research code for the paper SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks ☆226 · Updated 2 months ago
- Code for the paper 🌳 Tree Search for Language Model Agents ☆205 · Updated 11 months ago
- SmartPlay is a benchmark for Large Language Models (LLMs) that uses a variety of games to test important LLM capabilities as agents. … ☆140 · Updated last year
- Towards Large Multimodal Models as Visual Foundation Agents ☆223 · Updated 2 months ago
- ☆91 · Updated last year
- Code for the paper "Learning to Reason without External Rewards" ☆325 · Updated last week
- Code for the paper Autonomous Evaluation and Refinement of Digital Agents [COLM 2024] ☆138 · Updated 7 months ago
- Implementation of "Describe, Explain, Plan and Select: Interactive Planning with Large Language Models Enables Open-World Multi-Task Agen… ☆280 · Updated last year
- ☆210 · Updated 4 months ago
- "Is Your LLM Secretly a World Model of the Internet? Model-Based Planning for Web Agents" ☆78 · Updated 3 months ago
- ☆186 · Updated this week
- ☆61 · Updated 4 months ago
- [IROS'25 Oral & NeurIPSw'24] Official implementation of "MineDreamer: Learning to Follow Instructions via Chain-of-Imagination for Simula… ☆91 · Updated last month
- Text world based on Minecraft rules ☆15 · Updated last year
- Code for Visual Sketchpad: Sketching as a Visual Chain of Thought for Multimodal Language Models ☆245 · Updated 8 months ago
- Simulating Large-Scale Multi-Agent Interactions with Limited Multimodal Senses and Physical Needs ☆88 · Updated 3 months ago
- Code for the NeurIPS 2024 paper "AutoManual: Constructing Instruction Manuals by LLM Agents via Interactive Environmental Learning" ☆43 · Updated 8 months ago
- Odyssey: Empowering Minecraft Agents with Open-World Skills ☆317 · Updated last month
- A live list of papers on game playing and large multimodal models, from "A Survey on Game Playing Agents and Large Models: Met… ☆148 · Updated 10 months ago
- ☆64 · Updated last month
- ☆44 · Updated last year
- An LLM benchmark for the social deduction game "Resistance Avalon" ☆118 · Updated last month
- [ICLR 2024] Source code for the paper "Building Cooperative Embodied Agents Modularly with Large Language Models" ☆264 · Updated 3 months ago
- [ACL 2025] Code and data for OS-Genesis: Automating GUI Agent Trajectory Construction via Reverse Task Synthesis ☆147 · Updated this week
- GROOT: Learning to Follow Instructions by Watching Gameplay Videos (ICLR 2024 Spotlight) ☆65 · Updated last year
- Building Open LLM Web Agents with Self-Evolving Online Curriculum RL ☆422 · Updated last month
- ☆20 · Updated 3 months ago