terryyz / llm-benchmark
A list of LLM benchmark frameworks.
☆72 · Updated last year
Alternatives and similar repositories for llm-benchmark
Users interested in llm-benchmark are comparing it to the libraries listed below.
- Benchmarking LLMs with Challenging Tasks from Real Users ☆246 · Updated last year
- Code for the paper "ROUTERBENCH: A Benchmark for Multi-LLM Routing System" ☆150 · Updated last year
- [ICLR 2024] Skeleton-of-Thought: Prompting LLMs for Efficient Parallel Generation ☆180 · Updated last year
- The official evaluation suite and dynamic data release for MixEval ☆252 · Updated last year
- Evaluating LLMs with CommonGen-Lite ☆91 · Updated last year
- Open Implementations of LLM Analyses ☆107 · Updated last year
- LongEmbed: Extending Embedding Models for Long Context Retrieval (EMNLP 2024) ☆145 · Updated last year
- xCodeEval: A Large Scale Multilingual Multitask Benchmark for Code Understanding, Generation, Translation and Retrieval ☆87 · Updated last year
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models ☆63 · Updated last year
- Evaluating LLMs with fewer examples ☆168 · Updated last year
- Code repo for "Agent Instructs Large Language Models to be General Zero-Shot Reasoners" ☆116 · Updated last month
- RepoQA: Evaluating Long-Context Code Understanding ☆123 · Updated last year
- ToolBench, an evaluation suite for LLM tool manipulation capabilities ☆164 · Updated last year
- Codebase accompanying the "Summary of a Haystack" paper ☆79 · Updated last year
- [ACL 2024] LLM2LLM: Boosting LLMs with Novel Iterative Data Enhancement ☆191 · Updated last year
- Model, Code & Data for the EMNLP'23 paper "Making Large Language Models Better Data Creators" ☆135 · Updated 2 years ago
- ☆43 · Updated last year
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆118 · Updated last year
- ☆78 · Updated last year
- Data preparation code for the Amber 7B LLM ☆93 · Updated last year
- [ICML '24] R2E: Turn any GitHub Repository into a Programming Agent Environment ☆134 · Updated 7 months ago
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆157 · Updated last year
- Official implementation of the paper "On the Diagram of Thought" (https://arxiv.org/abs/2409.10038) ☆188 · Updated 2 months ago
- ☆85 · Updated 2 years ago
- 🔧 Compare how agent systems perform on several benchmarks 📊🚀 ☆102 · Updated 3 months ago
- ☆241 · Updated last year
- Evol-augment any dataset online ☆61 · Updated 2 years ago
- ☆129 · Updated last year
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆312 · Updated last year
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file ☆189 · Updated 8 months ago