zai-org / ComplexFuncBench
Complex Function Calling Benchmark.
☆139 · Updated 9 months ago
Alternatives and similar repositories for ComplexFuncBench
Users interested in ComplexFuncBench are comparing it to the libraries listed below.
- Official repository for the paper "ReasonIR: Training Retrievers for Reasoning Tasks". ☆203 · Updated 3 months ago
- LongEmbed: Extending Embedding Models for Long Context Retrieval (EMNLP 2024) ☆144 · Updated 11 months ago
- ☆128 · Updated last year
- 🚢 Data Toolkit for Sailor Language Models ☆94 · Updated 7 months ago
- Benchmarking LLMs with Challenging Tasks from Real Users ☆242 · Updated 11 months ago
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language Models ☆240 · Updated 11 months ago
- LOFT: A 1 Million+ Token Long-Context Benchmark ☆218 · Updated 4 months ago
- The official evaluation suite and dynamic data release for MixEval. ☆250 · Updated 11 months ago
- Official repository for "Scaling Retrieval-Based Language Models with a Trillion-Token Datastore". ☆216 · Updated 2 months ago
- Code repo for "Agent Instructs Large Language Models to be General Zero-Shot Reasoners" ☆116 · Updated this week
- Evaluating LLMs with fewer examples ☆163 · Updated last year
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extremely Long Lengths (ICLR 2024) ☆204 · Updated last year
- Verifiers for LLM Reinforcement Learning ☆76 · Updated 6 months ago
- ☆83 · Updated this week
- [NeurIPS 2024] Spider2-V: How Far Are Multimodal Agents From Automating Data Science and Engineering Workflows? ☆131 · Updated last year
- Codebase accompanying the Summary of a Haystack paper. ☆79 · Updated last year
- The first dense retrieval model that can be prompted like an LM ☆89 · Updated 5 months ago
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆148 · Updated 11 months ago
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆117 · Updated last year
- BrowseComp-Plus: A More Fair and Transparent Evaluation Benchmark of Deep-Research Agent ☆94 · Updated 3 weeks ago
- ☆102 · Updated 11 months ago
- Reproducible, flexible LLM evaluations ☆256 · Updated last week
- Code for the EMNLP 2024 paper "Detecting and Mitigating Contextual Hallucinations in Large Language Models Using Only Attention Maps" ☆135 · Updated last week
- The HELMET Benchmark ☆177 · Updated 2 months ago
- Implementation of the paper "LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens" ☆150 · Updated last year
- This is the official repository for Inheritune. ☆115 · Updated 8 months ago
- BABILong is a benchmark for LLM evaluation using the needle-in-a-haystack approach. ☆215 · Updated last month
- This is the repository for our paper "INTERS: Unlocking the Power of Large Language Models in Search with Instruction Tuning" ☆204 · Updated 10 months ago
- [ICLR 2025] BRIGHT: A Realistic and Challenging Benchmark for Reasoning-Intensive Retrieval ☆168 · Updated last month
- ☆155 · Updated last year