microsoft / LLF-Bench
A benchmark for evaluating learning agents based on just language feedback
☆88Updated 3 months ago
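LLF-Bench environments follow a Gym-style interaction loop but replace the usual scalar reward with natural-language feedback the agent must interpret. As a toy illustration only (the class, method names, and task below are hypothetical sketches, not LLF-Bench's actual API), here is a minimal environment/agent loop in that spirit:

```python
# Toy sketch of a language-feedback environment in the spirit of LLF-Bench.
# NOT LLF-Bench's real API: the names here are illustrative assumptions.
# Instead of a numeric reward, step() returns a feedback string.

class GuessNumberEnv:
    """Agent must guess a hidden integer; feedback is text, not a number."""

    def __init__(self, target: int = 7, low: int = 1, high: int = 10):
        self.target, self.low, self.high = target, low, high

    def reset(self) -> str:
        # Return the task instruction as text.
        return f"Guess an integer between {self.low} and {self.high}."

    def step(self, guess: int):
        # Return (language_feedback, done) instead of (obs, reward, done, info).
        if guess == self.target:
            return "Correct! You found the number.", True
        hint = "higher" if guess < self.target else "lower"
        return f"{guess} is wrong; try a {hint} number.", False


def run_binary_search_agent(env: GuessNumberEnv):
    """A simple agent that parses the language feedback to narrow its search."""
    env.reset()
    low, high = env.low, env.high
    transcript = []
    done = False
    while not done:
        guess = (low + high) // 2
        feedback, done = env.step(guess)
        transcript.append((guess, feedback))
        # The agent "learns" purely from the feedback text.
        if "higher" in feedback:
            low = guess + 1
        elif "lower" in feedback:
            high = guess - 1
    return transcript
```

In LLF-Bench itself the feedback parser would typically be an LLM rather than string matching, but the loop structure is the same: act, read language feedback, adapt.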
Alternatives and similar repositories for LLF-Bench
Users interested in LLF-Bench are comparing it to the repositories listed below.
- SmartPlay is a benchmark for Large Language Models (LLMs) that uses a variety of games to test important capabilities of LLMs as agents. … ☆140 · Updated last year
- ScienceWorld is a text-based virtual environment centered around accomplishing tasks from the standardized elementary science curriculum. ☆292 · Updated 2 months ago
- Intelligent Go-Explore: Standing on the Shoulders of Giant Foundation Models ☆64 · Updated 6 months ago
- Code for the paper "Autonomous Evaluation and Refinement of Digital Agents" [COLM 2024] ☆143 · Updated 9 months ago
- DialOp: Decision-oriented dialogue environments for collaborative language agents ☆109 · Updated 10 months ago
- AdaPlanner: Language Models for Decision Making via Adaptive Planning from Feedback ☆120 · Updated 5 months ago
- An OpenAI Gym environment to evaluate the ability of LLMs (e.g., GPT-4, Claude) in long-horizon reasoning and task planning in dynamic mult… ☆70 · Updated 2 years ago
- Repository for the paper "Stream of Search: Learning to Search in Language" ☆150 · Updated 7 months ago
- Reasoning with Language Model is Planning with World Model ☆171 · Updated 2 years ago
- Official implementation of the DECKARD Agent from the paper "Do Embodied Agents Dream of Pixelated Sheep?"