τ²-Bench: Evaluating Conversational Agents in a Dual-Control Environment
☆800 · Updated Feb 11, 2026
Alternatives and similar repositories for tau2-bench
Users interested in tau2-bench are comparing it to the repositories listed below.
- Code and Data for Tau-Bench ☆1,114 · Updated Aug 28, 2025
- ☆171 · Updated Oct 29, 2025
- A tool-learning benchmark aiming to balance stability and realism, based on ToolBench ☆220 · Updated Apr 15, 2025
- Search-R1: An efficient, scalable RL training framework for LLMs with interleaved reasoning and search-engine calling, based on veRL ☆4,135 · Updated Nov 13, 2025
- SkyRL: A Modular Full-stack RL Library for LLMs ☆1,656 · Updated this week
- ☆26 · Updated Jul 29, 2025
- The FutureDial-RAG Challenge on dialog systems with retrieval-augmented generation, co-located with SLT 2024 ☆11 · Updated Aug 10, 2024
- MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering ☆1,341 · Updated Feb 26, 2026
- A dataset for training and evaluating LLMs on decision making about "when (not) to call" functions ☆55 · Updated Apr 29, 2025
- Complex Function Calling Benchmark ☆165 · Updated Jan 20, 2025
- Code for Paper: Training Software Engineering Agents and Verifiers with SWE-Gym [ICML 2025] ☆649 · Updated Jul 29, 2025
- Feedback-Driven Tool-Use Improvements in Large Language Models via Automated Build Environments ☆48 · Updated Jan 8, 2026
- verl: Volcano Engine Reinforcement Learning for LLMs ☆19,739 · Updated this week
- [COLING 2025] NesTools: A Dataset for Evaluating Nested Tool Learning Abilities of Large Language Models ☆18 · Updated Jan 18, 2025
- Meta Agents Research Environments is a comprehensive platform designed to evaluate AI agents in dynamic, realistic scenarios. Unlike stat… ☆447 · Updated Jan 23, 2026
- [ACL 2024] T-Eval: Evaluating Tool Utilization Capability of Large Language Models Step by Step ☆304 · Updated Apr 3, 2024
- ☆335 · Updated May 24, 2025
- ☆4,390 · Updated Jul 31, 2025
- ☆28 · Updated Jun 5, 2025
- ☆14 · Updated Apr 16, 2025
- [NeurIPS 2024 D&B Track] GTA: A Benchmark for General Tool Agents ☆136 · Updated Feb 16, 2026
- xLAM: A Family of Large Action Models to Empower AI Agent Systems ☆602 · Updated Aug 21, 2025
- [arXiv:2512.19673] Bottom-up Policy Optimization: Your Language Model Policy Secretly Contains Internal Policies ☆61 · Updated Feb 6, 2026
- A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24) ☆3,211 · Updated Feb 8, 2026
- Code for Paper: Autonomous Evaluation and Refinement of Digital Agents [COLM 2024] ☆148 · Updated Nov 26, 2024
- [NeurIPS 2024] A comprehensive benchmark for evaluating critique ability of LLMs ☆49 · Updated Nov 29, 2024
- Simple RL training for reasoning ☆3,830 · Updated Dec 23, 2025
- ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024) ☆692 · Updated Jan 20, 2025
- OneEdit: A Neural-Symbolic Collaboratively Knowledge Editing System ☆19 · Updated Oct 14, 2024
- AgentSynth: Scalable Task Generation for Generalist Computer-Use Agents ☆37 · Updated Oct 7, 2025
- Official repository for the ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ☆184 · Updated May 20, 2025
- C^3-Bench: The Things Real Disturbing LLM based Agent in Multi-Tasking ☆37 · Updated Mar 1, 2026
- RewardBench: the first evaluation tool for reward models ☆702 · Updated Feb 16, 2026
- Understanding R1-Zero-Like Training: A Critical Perspective ☆1,222 · Updated Aug 27, 2025
- An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO, DAPO, REINFORCE++, TIS, vLLM, Ray, Async RL) ☆9,084 · Updated this week
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆946 · Updated Feb 16, 2025
- Building Open LLM Web Agents with Self-Evolving Online Curriculum RL ☆512 · Updated Jun 6, 2025
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆2,324 · Updated this week
- LiveMCPBench is a benchmark for evaluating the ability of agents to navigate and utilize a large-scale MCP toolset. It provides a compreh… ☆93 · Updated Dec 18, 2025