terryyz / llm-benchmark
A list of LLM benchmark frameworks.
☆66 · Updated last year
Alternatives and similar repositories for llm-benchmark:
Users interested in llm-benchmark are comparing it to the libraries listed below.
- Benchmarking LLMs with Challenging Tasks from Real Users ☆221 · Updated 6 months ago
- Codebase accompanying the "Summary of a Haystack" paper ☆77 · Updated 7 months ago
- ☆75 · Updated last year
- Benchmark baseline for retrieval QA applications ☆109 · Updated last year
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆116 · Updated 10 months ago
- Repository for organizing datasets and papers used in Open LLM ☆95 · Updated last year
- Code repo for "Agent Instructs Large Language Models to be General Zero-Shot Reasoners" ☆107 · Updated 7 months ago
- Open Implementations of LLM Analyses ☆102 · Updated 7 months ago
- The code for the paper "ROUTERBENCH: A Benchmark for Multi-LLM Routing System" ☆118 · Updated 10 months ago
- Data preparation code for the Amber 7B LLM ☆89 · Updated 11 months ago
- Self-Reflection in LLM Agents: Effects on Problem-Solving Performance ☆69 · Updated 5 months ago
- ☆166 · Updated 8 months ago
- RepoQA: Evaluating Long-Context Code Understanding ☆108 · Updated 6 months ago
- Scalable Meta-Evaluation of LLMs as Evaluators ☆42 · Updated last year
- Code and data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR 2025] ☆105 · Updated 2 months ago
- Reformatted Alignment ☆115 · Updated 7 months ago
- Spherically merge PyTorch/HF-format language models with minimal feature loss (see the SLERP sketch after this list) ☆121 · Updated last year
- Experiments on speculative sampling with Llama models ☆125 · Updated last year
- LongEmbed: Extending Embedding Models for Long Context Retrieval (EMNLP 2024) ☆134 · Updated 5 months ago
- Meta-CoT: Generalizable Chain-of-Thought Prompting in Mixed-task Scenarios with Large Language Models ☆96 · Updated last year
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆139 · Updated 6 months ago
- [ICLR 2024] Skeleton-of-Thought: Prompting LLMs for Efficient Parallel Generation ☆167 · Updated last year
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆136 · Updated 6 months ago
- Mixing Language Models with Self-Verification and Meta-Verification ☆104 · Updated 4 months ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆143 · Updated 7 months ago
- Small and Efficient Mathematical Reasoning LLMs ☆71 · Updated last year
- Evaluating LLMs with fewer examples ☆151 · Updated last year
- Simple implementation of speculative sampling in NumPy for GPT-2 (the core accept/reject step is sketched after this list) ☆95 · Updated last year
- Official implementation for "Extending LLMs' Context Window with 100 Samples" ☆77 · Updated last year
- The code and data for "MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark" [NeurIPS 2024] ☆237 · Updated 2 months ago
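Two of the techniques named above are compact enough to illustrate. First, the spherical-merge entry: SLERP merging interpolates along the great circle between two weight vectors rather than averaging them linearly, which better preserves each model's direction in weight space. This is a minimal NumPy sketch of SLERP on a single tensor, not the listed repo's code; a real merge walks the full state dict and often varies `t` per layer.

```python
import numpy as np

def slerp(w0, w1, t=0.5, eps=1e-8):
    """Spherical linear interpolation between two weight tensors.

    Interpolates along the great circle between the flattened tensors;
    t=0 returns w0, t=1 returns w1.
    """
    v0, v1 = w0.ravel(), w1.ravel()
    n0, n1 = np.linalg.norm(v0), np.linalg.norm(v1)
    cos_theta = np.clip(v0 @ v1 / (n0 * n1 + eps), -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if theta < eps:
        # Near-parallel vectors: fall back to plain linear interpolation.
        out = (1 - t) * v0 + t * v1
    else:
        s = np.sin(theta)
        out = (np.sin((1 - t) * theta) / s) * v0 + (np.sin(t * theta) / s) * v1
    return out.reshape(w0.shape)
```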
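Second, the speculative-sampling entries: a cheap draft model proposes tokens, and the target model accepts each proposal with probability min(1, p/q), resampling from the normalized residual max(p − q, 0) on rejection, which provably matches the target distribution. Below is a minimal NumPy sketch of that single accept/reject step under assumed inputs (both models' next-token distributions and the drafted token); the listed repos implement the full multi-token draft-and-verify loop.

```python
import numpy as np

rng = np.random.default_rng(0)

def speculative_step(draft_probs, target_probs, draft_token):
    """One accept/reject step of speculative sampling.

    draft_probs, target_probs: next-token distributions over the vocab
    from the small draft model and the large target model.
    draft_token: the token index the draft model proposed.
    """
    p = target_probs[draft_token]
    q = draft_probs[draft_token]
    if rng.uniform() < min(1.0, p / q):
        return draft_token  # accept the draft model's proposal
    # Reject: resample from the normalized residual distribution.
    residual = np.maximum(target_probs - draft_probs, 0.0)
    residual /= residual.sum()
    return rng.choice(len(residual), p=residual)
```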