PKU-RL / Creative-Agents
☆44 · Updated last year
Alternatives and similar repositories for Creative-Agents
Users interested in Creative-Agents are comparing it to the libraries listed below.
- GROOT: Learning to Follow Instructions by Watching Gameplay Videos (ICLR 2024 Spotlight) ☆66 · Updated last year
- [IROS'25 Oral & NeurIPSw'24] Official implementation of "MineDreamer: Learning to Follow Instructions via Chain-of-Imagination for Simula…" ☆95 · Updated 3 months ago
- [ICLR 2024 Spotlight] Code for the paper "Text2Reward: Reward Shaping with Language Models for Reinforcement Learning" ☆176 · Updated 9 months ago
- ☆111 · Updated 5 months ago
- Official implementation of the DECKARD agent from the paper "Do Embodied Agents Dream of Pixelated Sheep?" ☆94 · Updated 2 years ago
- ☆94 · Updated last year
- [ICLR 2024] Source code for the paper "Building Cooperative Embodied Agents Modularly with Large Language Models" ☆270 · Updated 5 months ago
- Official implementation of the paper "ROCKET-1: Mastering Open-World Interaction with Visual-Temporal Context Prompting" (CVPR 2025) ☆43 · Updated 5 months ago
- Official repo of LangSuitE ☆84 · Updated last year
- SmartPlay is a benchmark for Large Language Models (LLMs) that uses a variety of games to test important LLM capabilities as agents. … ☆140 · Updated last year
- Uni-RLHF platform for "Uni-RLHF: Universal Platform and Benchmark Suite for Reinforcement Learning with Diverse Human Feedback" (ICLR 2024…) ☆41 · Updated 9 months ago
- ☆130 · Updated last year
- Implementation of "Describe, Explain, Plan and Select: Interactive Planning with Large Language Models Enables Open-World Multi-Task Agen…" ☆284 · Updated 2 years ago
- Implementation of "Open-World Multi-Task Control Through Goal-Aware Representation Learning and Adaptive Horizon Prediction" ☆46 · Updated 2 years ago
- Source code for the paper "COMBO: Compositional World Models for Embodied Multi-Agent Cooperation" ☆40 · Updated 6 months ago
- [CVPR 2024] The official implementation of MP5 ☆103 · Updated last year
- [ECCV 2024] 🐙 Octopus, an embodied vision-language model trained with RLEF, excelling at embodied visual planning and programming ☆293 · Updated last year
- [ECCV 2024] STEVE in Minecraft, from "See and Think: Embodied Agent in Virtual Environment" ☆39 · Updated last year
- Simulating Large-Scale Multi-Agent Interactions with Limited Multimodal Senses and Physical Needs ☆95 · Updated 5 months ago
- The source code of the paper "Leveraging Pre-trained Large Language Models to Construct and Utilize World Models for Model-based Task Pla…" ☆99 · Updated last year
- Code for "Interactive Task Planning with Language Models" ☆32 · Updated 4 months ago
- The official implementation of the paper "Read to Play (R2-Play): Decision Transformer with Multimodal Game Instruction" ☆33 · Updated last year
- Code for "Learning to Model the World with Language" (ICML 2024 Oral) ☆392 · Updated last year
- [ICCV'23] LLM-Planner: Few-Shot Grounded Planning for Embodied Agents with Large Language Models ☆201 · Updated 5 months ago
- [AAAI 2024 Oral] ProAgent: Building Proactive Cooperative Agents with Large Language Models ☆90 · Updated 6 months ago
- [NeurIPS 2024] Official implementation of "Optimus-1: Hybrid Multimodal Memory Empowered Agents Excel in Long-Horizon Tasks" ☆83 · Updated 3 months ago
- ☆31 · Updated 11 months ago
- Evaluate Multimodal LLMs as Embodied Agents ☆54 · Updated 7 months ago
- An RL-Friendly Vision-Language Model for Minecraft ☆36 · Updated 11 months ago
- Official repository for "RLVR-World: Training World Models with Reinforcement Learning", https://arxiv.org/abs/2505.13934 ☆85 · Updated 3 months ago