microsoft / SmartPlay
SmartPlay is a benchmark for Large Language Models (LLMs). It uses a variety of games to test important LLM capabilities as agents, and is designed to be easy to use and to support future development of LLMs.
☆140 · Updated last year
Alternatives and similar repositories for SmartPlay
Users interested in SmartPlay are comparing it to the libraries listed below.
- ScienceWorld is a text-based virtual environment centered around accomplishing tasks from the standardized elementary science curriculum. ☆289 · Updated 2 months ago
- ☆100 · Updated last year
- Implementation of "Describe, Explain, Plan and Select: Interactive Planning with Large Language Models Enables Open-World Multi-Task Agen… ☆284 · Updated 2 years ago
- Lamorel is a Python library designed for RL practitioners eager to use Large Language Models (LLMs). ☆237 · Updated last week
- Official implementation of the DECKARD Agent from the paper "Do Embodied Agents Dream of Pixelated Sheep?" ☆94 · Updated 2 years ago
- ☆144 · Updated last year
- ☆94 · Updated last year
- We perform functional grounding of LLMs' knowledge in BabyAI-Text. ☆273 · Updated last year
- Verlog: A multi-turn RL framework for LLM agents ☆38 · Updated last week
- ☆219 · Updated 2 years ago
- ☆62 · Updated 6 months ago
- AdaPlanner: Language Models for Decision Making via Adaptive Planning from Feedback ☆119 · Updated 5 months ago
- Reasoning with Language Model is Planning with World Model ☆170 · Updated 2 years ago
- Research code for "ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL" ☆189 · Updated 4 months ago
- Code for "Learning to Model the World with Language." ICML 2024 Oral. ☆392 · Updated last year
- An OpenAI Gym environment to evaluate the ability of LLMs (e.g. GPT-4, Claude) in long-horizon reasoning and task planning in dynamic mult… ☆70 · Updated 2 years ago
- Natural Language Reinforcement Learning ☆96 · Updated last month
- Code for Contrastive Preference Learning (CPL) ☆175 · Updated 9 months ago
- Code for the paper "Autonomous Evaluation and Refinement of Digital Agents" [COLM 2024] ☆143 · Updated 9 months ago
- Benchmarking agentic LLM and VLM reasoning on games ☆190 · Updated 3 weeks ago
- ☆111 · Updated 5 months ago
- ☆108 · Updated 2 months ago
- A benchmark for evaluating learning agents based on just language feedback ☆88 · Updated 3 months ago
- Official repo of LangSuitE ☆84 · Updated last year
- Implementation of "Open-World Multi-Task Control Through Goal-Aware Representation Learning and Adaptive Horizon Prediction" ☆46 · Updated 2 years ago
- ☆116 · Updated 7 months ago
- The source code of the paper "Leveraging Pre-trained Large Language Models to Construct and Utilize World Models for Model-based Task Pla… ☆100 · Updated last year
- [ICLR 2024] Trajectory-as-Exemplar Prompting with Memory for Computer Control ☆60 · Updated 8 months ago
- ☆44 · Updated last year
- An extensible benchmark for evaluating large language models on planning ☆403 · Updated 2 months ago