zai-org / ComplexFuncBench
Complex Function Calling Benchmark.
☆163 · Updated last year
Alternatives and similar repositories for ComplexFuncBench
Users interested in ComplexFuncBench are comparing it to the libraries listed below.
- Official repository for the paper "ReasonIR: Training Retrievers for Reasoning Tasks" ☆216 · Updated 7 months ago
- LongEmbed: Extending Embedding Models for Long Context Retrieval (EMNLP 2024) ☆145 · Updated last year
- LOFT: A 1 Million+ Token Long-Context Benchmark ☆225 · Updated 7 months ago
- Benchmarking LLMs with Challenging Tasks from Real Users ☆245 · Updated last year
- BrowseComp-Plus: A More Fair and Transparent Evaluation Benchmark of Deep-Research Agent ☆164 · Updated last month
- ☆98 · Updated 3 weeks ago
- ☆236 · Updated 2 months ago
- ☆107 · Updated last year
- Official Code Repository for the paper "Distilling LLM Agent into Small Models with Retrieval and Code Tools" ☆195 · Updated 3 months ago
- Official repository for "Scaling Retrieval-Based Langauge Models with a Trillion-Token Datastore".☆223Updated last month
- Code repo for "Agent Instructs Large Language Models to be General Zero-Shot Reasoners" ☆120 · Updated 3 months ago
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" [COLM 2025] ☆178 · Updated 6 months ago
- 🚢 Data Toolkit for Sailor Language Models ☆95 · Updated 11 months ago
- Reproducible, flexible LLM evaluations ☆331 · Updated this week
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆148 · Updated last year
- Dynamic Cheatsheet: Test-Time Learning with Adaptive Memory ☆246 · Updated 8 months ago
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language Models ☆249 · Updated last year
- A dataset for training and evaluating LLMs on decision making about "when (not) to call" functions ☆50 · Updated 9 months ago
- Meta Agents Research Environments is a comprehensive platform designed to evaluate AI agents in dynamic, realistic scenarios. Unlike static… ☆418 · Updated last week
- ☆131 · Updated 8 months ago
- ☆129 · Updated last year
- ☆92 · Updated 8 months ago
- General Reasoner: Advancing LLM Reasoning Across All Domains [NeurIPS 2025] ☆214 · Updated 2 months ago
- The official evaluation suite and dynamic data release for MixEval ☆254 · Updated last year
- Code for the EMNLP 2024 paper "Detecting and Mitigating Contextual Hallucinations in Large Language Models Using Only Attention Maps" ☆142 · Updated 3 months ago
- ☆74 · Updated 11 months ago
- Scalable Meta-Evaluation of LLMs as Evaluators ☆43 · Updated last year
- ☆168 · Updated 3 months ago
- The HELMET Benchmark ☆198 · Updated last month
- Framework and toolkits for building and evaluating collaborative agents that can work together with humans ☆120 · Updated last month