PKU-RL / Creative-Agents
☆46 · Updated 2 years ago
Alternatives and similar repositories for Creative-Agents
Users interested in Creative-Agents are comparing it to the libraries listed below.
- GROOT: Learning to Follow Instructions by Watching Gameplay Videos (ICLR'24, Spotlight) ☆67 · Updated 2 years ago
- [IROS'25 Oral & NeurIPSw'24] Official implementation of "MineDreamer: Learning to Follow Instructions via Chain-of-Imagination for Simula… ☆100 · Updated 7 months ago
- ☆118 · Updated 9 months ago
- [ICLR 2024 Spotlight] Text2Reward: Reward Shaping with Language Models for Reinforcement Learning ☆197 · Updated last year
- Official implementation of the DECKARD Agent from the paper "Do Embodied Agents Dream of Pixelated Sheep?" ☆94 · Updated 2 years ago
- ☆99 · Updated last year
- [ICLR 2024] Source code for the paper "Building Cooperative Embodied Agents Modularly with Large Language Models" ☆291 · Updated 10 months ago
- ☆133 · Updated last year
- Implementation of "Describe, Explain, Plan and Select: Interactive Planning with Large Language Models Enables Open-World Multi-Task Agen… ☆289 · Updated 2 years ago
- Official Repo of LangSuitE ☆84 · Updated last year
- Uni-RLHF platform for "Uni-RLHF: Universal Platform and Benchmark Suite for Reinforcement Learning with Diverse Human Feedback" (ICLR2024… ☆42 · Updated last year
- [CVPR2024] The official implementation of MP5 ☆106 · Updated last year
- [ECCV2024] 🐙Octopus, an embodied vision-language model trained with RLEF that emerges superior in embodied visual planning and programming. ☆294 · Updated last year
- Official implementation of the paper "ROCKET-1: Mastering Open-World Interaction with Visual-Temporal Context Prompting" (CVPR'25) ☆46 · Updated 9 months ago
- ☆33 · Updated last year
- SmartPlay is a benchmark for Large Language Models (LLMs). It uses a variety of games to test important LLM capabilities as agents. … ☆145 · Updated last year
- The official implementation of the paper "Read to Play (R2-Play): Decision Transformer with Multimodal Game Instruction".☆33Updated last year
- Source codes for the paper "COMBO: Compositional World Models for Embodied Multi-Agent Cooperation"☆45Updated 10 months ago
- ☆89 · Updated 2 months ago
- Code for "Interactive Task Planning with Language Models" ☆33 · Updated 2 weeks ago
- [ECCV 2024] STEVE in Minecraft, from "See and Think: Embodied Agent in Virtual Environment" ☆41 · Updated 2 years ago
- Official Implementation of "JARVIS-VLA: Post-Training Large-Scale Vision Language Models to Play Visual Games with Keyboards and Mouse"☆122Updated 5 months ago
- The source code of the paper "Leveraging Pre-trained Large Language Models to Construct and Utilize World Models for Model-based Task Pla…☆107Updated last year
- Evaluate Multimodal LLMs as Embodied Agents☆57Updated 11 months ago
- LoTa-Bench: Benchmarking Language-oriented Task Planners for Embodied Agents (ICLR 2024)☆86Updated 7 months ago
- [ICCV'23] LLM-Planner: Few-Shot Grounded Planning for Embodied Agents with Large Language Models☆213Updated 10 months ago
- Code for "Learning to Model the World with Language." ICML 2024 Oral.☆413Updated 3 weeks ago
- ☆108Updated 2 weeks ago
- [ICML 2025 Oral] Official repo of EmbodiedBench, a comprehensive benchmark designed to evaluate MLLMs as embodied agents.☆260Updated 3 months ago
- Implementation of "Open-World Multi-Task Control Through Goal-Aware Representation Learning and Adaptive Horizon Prediction"☆46Updated 2 years ago