sierra-research / tau-bench
Code and Data for Tau-Bench
☆346 · Updated 2 months ago
Alternatives and similar repositories for tau-bench:
Users interested in tau-bench are comparing it to the repositories listed below.
- AWM: Agent Workflow Memory ☆252 · Updated last month
- ☆374 · Updated 2 months ago
- An Analytical Evaluation Board of Multi-turn LLM Agents [NeurIPS 2024 Oral] ☆291 · Updated 10 months ago
- Code for the paper 🌳 Tree Search for Language Model Agents ☆185 · Updated 7 months ago
- AgentLab: An open-source framework for developing, testing, and benchmarking web agents on diverse tasks, designed for scalability and re… ☆275 · Updated this week
- ☆367 · Updated last month
- 🌍 Repository for "AppWorld: A Controllable World of Apps and People for Benchmarking Interactive Coding Agent", ACL'24 Best Resource Pap… ☆162 · Updated 3 months ago
- TapeAgents is a framework that facilitates all stages of the LLM Agent development lifecycle ☆237 · Updated this week
- Code for Paper: Training Software Engineering Agents and Verifiers with SWE-Gym ☆402 · Updated last week
- Data and code for FreshLLMs (https://arxiv.org/abs/2310.03214) ☆349 · Updated last week
- A simple unified framework for evaluating LLMs ☆204 · Updated 2 weeks ago
- Code repo for "WebArena: A Realistic Web Environment for Building Autonomous Agents" ☆921 · Updated last month
- Code for Husky, an open-source language agent that solves complex, multi-step reasoning tasks. Husky v1 addresses numerical, tabular and … ☆338 · Updated 9 months ago
- Automatic evals for LLMs ☆334 · Updated this week
- ☆160 · Updated 7 months ago
- A comprehensive repository of reasoning tasks for LLMs (and beyond) ☆423 · Updated 5 months ago
- 🤠 Agent-as-a-Judge and DevAI dataset ☆375 · Updated 2 months ago
- VisualWebArena is a benchmark for multimodal agents. ☆318 · Updated 4 months ago
- [NeurIPS 2023 D&B] Code repository for InterCode benchmark https://arxiv.org/abs/2306.14898 ☆209 · Updated 10 months ago
- A collection of resources for computer-use GUI agents, including videos, blogs, papers, and projects. ☆289 · Updated last week
- Attribute (or cite) statements generated by LLMs back to in-context information. ☆218 · Updated 5 months ago
- AIDE: AI-Driven Exploration in the Space of Code. A state-of-the-art machine learning engineering agent that automates AI R&D. ☆803 · Updated 3 weeks ago
- ☆583 · Updated 2 months ago
- Code repo for "Agent Instructs Large Language Models to be General Zero-Shot Reasoners" ☆104 · Updated 6 months ago
- MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering ☆645 · Updated 2 months ago
- RewardBench: the first evaluation tool for reward models. ☆526 · Updated 3 weeks ago
- An agent benchmark with tasks in a simulated software company. ☆268 · Updated this week
- Beating the GAIA benchmark with Transformers Agents. 🚀 ☆103 · Updated last month
- Code and data for "Lumos: Learning Agents with Unified Data, Modular Design, and Open-Source LLMs" ☆463 · Updated last year