sierra-research / tau-bench
Code and Data for Tau-Bench
☆713 · Updated 3 weeks ago
Alternatives and similar repositories for tau-bench
Users interested in tau-bench are comparing it to the libraries listed below.
- xLAM: A Family of Large Action Models to Empower AI Agent Systems ☆507 · Updated last week
- MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering ☆815 · Updated last month
- Official codebase for "SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution" ☆571 · Updated 4 months ago
- Automatic evals for LLMs ☆488 · Updated last month
- Code repo for "WebArena: A Realistic Web Environment for Building Autonomous Agents" ☆1,075 · Updated 5 months ago
- AWM: Agent Workflow Memory ☆297 · Updated 6 months ago
- AgentLab: An open-source framework for developing, testing, and benchmarking web agents on diverse tasks, designed for scalability and re… ☆372 · Updated this week
- AIDE: AI-Driven Exploration in the Space of Code. The machine learning engineering agent that automates AI R&D. ☆972 · Updated this week
- ☆518 · Updated last month
- Code for Paper: Training Software Engineering Agents and Verifiers with SWE-Gym [ICML 2025] ☆513 · Updated this week
- An agent benchmark with tasks in a simulated software company. ☆509 · Updated this week
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" ☆608 · Updated 2 weeks ago
- [NeurIPS 2022] 🛒 WebShop: Towards Scalable Real-World Web Interaction with Grounded Language Agents ☆379 · Updated 10 months ago
- ☆616 · Updated 6 months ago
- ☆240 · Updated last week
- ☆953 · Updated 6 months ago
- ☆191 · Updated 11 months ago
- 🌍 Repository for "AppWorld: A Controllable World of Apps and People for Benchmarking Interactive Coding Agent", ACL'24 Best Resource Pap… ☆231 · Updated 2 months ago
- Code for Quiet-STaR ☆735 · Updated 11 months ago
- [ICML 2024] Official repository for "Language Agent Tree Search Unifies Reasoning Acting and Planning in Language Models" ☆766 · Updated last year
- LLMs can generate feedback on their work, use it to improve the output, and repeat this process iteratively. ☆719 · Updated 9 months ago
- Scaling Data for SWE-agents ☆328 · Updated this week
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … ☆736 · Updated 4 months ago
- RewardBench: the first evaluation tool for reward models. ☆619 · Updated last month
- Benchmarking long-form factuality in large language models. Original code for our paper "Long-form factuality in large language models". ☆627 · Updated 2 weeks ago
- Official repo for the paper "Scaling Synthetic Data Creation with 1,000,000,000 Personas" ☆1,249 · Updated 5 months ago
- ☆1,028 · Updated 7 months ago
- Official Repo for ICML 2024 paper "Executable Code Actions Elicit Better LLM Agents" by Xingyao Wang, Yangyi Chen, Lifan Yuan, Yizhe Zhan… ☆1,310 · Updated last year
- SkyRL: A Modular Full-stack RL Library for LLMs ☆679 · Updated this week
- An Analytical Evaluation Board of Multi-turn LLM Agents [NeurIPS 2024 Oral] ☆332 · Updated last year