icip-cas / LiveMCPBench
LiveMCPBench is a benchmark for evaluating the ability of agents to navigate and utilize a large-scale MCP toolset. It provides a comprehensive set of tasks that challenge agents to effectively use various tools in daily scenarios.
☆92 · Updated last month
Alternatives and similar repositories for LiveMCPBench
Users interested in LiveMCPBench are comparing it to the repositories listed below.
- ☆133 · Updated last month
- Complex Function Calling Benchmark ☆165 · Updated last year
- Resources for our paper: "Agent-R: Training Language Model Agents to Reflect via Iterative Self-Training" ☆169 · Updated 3 months ago
- [ACL 2025] Agentic Reward Modeling: Integrating Human Preferences with Verifiable Correctness Signals for Reliable Reward Systems ☆125 · Updated 8 months ago
- Meta Agents Research Environments is a comprehensive platform designed to evaluate AI agents in dynamic, realistic scenarios. Unlike stat… ☆427 · Updated 3 weeks ago
- [NeurIPS 2024] Spider2-V: How Far Are Multimodal Agents From Automating Data Science and Engineering Workflows? ☆136 · Updated last year
- Data Synthesis for Deep Research Based on Semi-Structured Data ☆198 · Updated last month
- A dataset for training and evaluating LLMs on decision making about "when (not) to call" functions ☆55 · Updated 9 months ago
- ☆169 · Updated 4 months ago
- (ACL 2025 Main) Code for MultiAgentBench: Evaluating the Collaboration and Competition of LLM agents https://www.arxiv.org/pdf/2503.019… ☆217 · Updated 3 months ago
- DeepDive: Advancing Deep Search Agents with Knowledge Graphs and Multi-Turn RL ☆281 · Updated 4 months ago
- Dynamic Cheatsheet: Test-Time Learning with Adaptive Memory ☆249 · Updated 8 months ago
- Implementation for OAgents: An Empirical Study of Building Effective Agents ☆306 · Updated 4 months ago
- [ICML 2025] Satori: Reinforcement Learning with Chain-of-Action-Thought Enhances LLM Reasoning via Autoregressive Search ☆108 · Updated 8 months ago
- Hammer: Robust Function-Calling for On-Device Language Models via Function Masking ☆112 · Updated 8 months ago
- ☆108 · Updated 2 months ago
- ☆52 · Updated 8 months ago
- The raw UserRL repo, under construction ☆94 · Updated 4 months ago
- Framework and toolkits for building and evaluating collaborative agents that can work together with humans ☆121 · Updated 2 months ago
- SWE-Swiss: A Multi-Task Fine-Tuning and RL Recipe for High-Performance Issue Resolution ☆104 · Updated 4 months ago
- DeepResearch Bench: A Comprehensive Benchmark for Deep Research Agents ☆576 · Updated this week
- Code that accompanies the public release of the paper Lost in Conversation (https://arxiv.org/abs/2505.06120) ☆206 · Updated 7 months ago
- AutoCoA (Automatic generation of Chain-of-Action) is an agent model framework that enhances the multi-turn tool usage capability of reaso… ☆130 · Updated 10 months ago
- ☆108 · Updated last year
- Benchmark and research code for the paper "SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks" ☆261 · Updated 9 months ago
- LIMI: Less is More for Agency ☆160 · Updated 3 months ago
- [NeurIPS'25 D&B] Mind2Web-2 Benchmark: Evaluating Agentic Search with Agent-as-a-Judge ☆98 · Updated last month
- [ICLR 2025] DSBench: How Far are Data Science Agents from Becoming Data Science Experts? ☆103 · Updated 5 months ago
- MTU-Bench: A Multi-granularity Tool-Use Benchmark for Large Language Models ☆58 · Updated 6 months ago
- MemEvolve & EvolveLab ☆165 · Updated last month