terryyz / llm-benchmark
A list of LLM benchmark frameworks.
☆73 · Updated last year
Alternatives and similar repositories for llm-benchmark
Users interested in llm-benchmark are comparing it to the libraries listed below.
- [ICLR 2024] Skeleton-of-Thought: Prompting LLMs for Efficient Parallel Generation ☆184 · Updated last year
- The official evaluation suite and dynamic data release for MixEval. ☆254 · Updated last year
- Benchmark baseline for retrieval QA applications ☆119 · Updated last year
- Code repo for "Agent Instructs Large Language Models to be General Zero-Shot Reasoners" ☆120 · Updated 3 months ago
- Spherical Merge PyTorch/HF format Language Models with minimal feature loss. ☆143 · Updated 2 years ago
- Open Implementations of LLM Analyses ☆107 · Updated last year
- 🔧 Compare how Agent systems perform on several benchmarks. 📊🚀 ☆103 · Updated 5 months ago
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆118 · Updated last year
- Accelerating your LLM training to full speed! Made with ❤️ by ServiceNow Research ☆282 · Updated this week
- Self-Reflection in LLM Agents: Effects on Problem-Solving Performance ☆93 · Updated last year
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆316 · Updated 2 years ago
- Model, Code & Data for the EMNLP'23 paper "Making Large Language Models Better Data Creators" ☆137 · Updated 2 years ago
- Benchmarking LLMs with Challenging Tasks from Real Users ☆245 · Updated last year
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language Models. ☆249 · Updated last year
- Evaluating LLMs with fewer examples ☆169 · Updated last year
- LongEmbed: Extending Embedding Models for Long Context Retrieval (EMNLP 2024) ☆145 · Updated last year
- [ACL 2024] LLM2LLM: Boosting LLMs with Novel Iterative Data Enhancement ☆193 · Updated last year
- Data preparation code for Amber 7B LLM ☆94 · Updated last year
- A pipeline for LLM knowledge distillation ☆112 · Updated 9 months ago
- ToolBench, an evaluation suite for LLM tool manipulation capabilities. ☆172 · Updated last year
- The code for the paper "ROUTERBENCH: A Benchmark for Multi-LLM Routing System" ☆153 · Updated last year
- Codebase accompanying the Summary of a Haystack paper. ☆80 · Updated last year
- Complex Function Calling Benchmark. ☆163 · Updated last year
- Official implementation of the paper "On the Diagram of Thought" (https://arxiv.org/abs/2409.10038) ☆191 · Updated last week
- [NeurIPS 2023] Code for the paper "Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias" ☆156 · Updated 2 years ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆279 · Updated last year