sierra-research / tau-bench
Code and Data for Tau-Bench
☆657 · Updated 5 months ago
Alternatives and similar repositories for tau-bench
Users interested in tau-bench are comparing it to the repositories listed below:
- xLAM: A Family of Large Action Models to Empower AI Agent Systems ☆482 · Updated 3 weeks ago
- Automatic evals for LLMs ☆461 · Updated 2 weeks ago
- Code for Paper: Training Software Engineering Agents and Verifiers with SWE-Gym [ICML 2025] ☆498 · Updated 2 months ago
- AWM: Agent Workflow Memory ☆288 · Updated 5 months ago
- MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering ☆795 · Updated 3 weeks ago
- ☆482 · Updated 2 weeks ago
- Official codebase for "SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution" ☆566 · Updated 3 months ago
- AgentLab: An open-source framework for developing, testing, and benchmarking web agents on diverse tasks, designed for scalability and re… ☆358 · Updated this week
- ☆184 · Updated 11 months ago
- TapeAgents is a framework that facilitates all stages of the LLM Agent development lifecycle ☆286 · Updated 2 weeks ago
- 🌍 Repository for "AppWorld: A Controllable World of Apps and People for Benchmarking Interactive Coding Agent", ACL'24 Best Resource Pap… ☆221 · Updated 2 months ago
- [NeurIPS 2022] 🛒 WebShop: Towards Scalable Real-World Web Interaction with Grounded Language Agents ☆364 · Updated 10 months ago
- [ICML 2024] Official repository for "Language Agent Tree Search Unifies Reasoning Acting and Planning in Language Models" ☆766 · Updated 11 months ago
- ☆611 · Updated 5 months ago
- Code repo for "WebArena: A Realistic Web Environment for Building Autonomous Agents" ☆1,048 · Updated 5 months ago
- An agent benchmark with tasks in a simulated software company. ☆468 · Updated 2 weeks ago
- ☆228 · Updated last month
- Scaling Data for SWE-agents ☆283 · Updated this week
- Benchmarking long-form factuality in large language models. Original code for our paper "Long-form factuality in large language models". ☆621 · Updated this week
- VisualWebArena is a benchmark for multimodal agents. ☆357 · Updated 8 months ago
- LLMs can generate feedback on their work, use it to improve the output, and repeat this process iteratively. ☆711 · Updated 9 months ago
- [NeurIPS 2024 Spotlight] Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models ☆643 · Updated 2 weeks ago
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" ☆582 · Updated 3 weeks ago
- A collection of resources for computer-use GUI agents, including videos, blogs, papers, and projects. ☆397 · Updated last month
- 🌎💪 BrowserGym, a Gym environment for web task automation ☆806 · Updated last week
- Data and code for FreshLLMs (https://arxiv.org/abs/2310.03214) ☆363 · Updated this week
- AIDE: AI-Driven Exploration in the Space of Code. The machine learning engineering agent that automates AI R&D. ☆949 · Updated this week
- Code and data for "Lumos: Learning Agents with Unified Data, Modular Design, and Open-Source LLMs" ☆465 · Updated last year
- ☆609 · Updated last month
- Code and implementations for the paper "AgentGym: Evolving Large Language Model-based Agents across Diverse Environments" by Zhiheng Xi e… ☆499 · Updated 4 months ago