terryyz / llm-benchmark
A list of LLM benchmark frameworks.
☆71 · Updated last year
Alternatives and similar repositories for llm-benchmark
Users interested in llm-benchmark are comparing it to the libraries listed below.
- Evaluating LLMs with fewer examples ☆163 · Updated last year
- Code for the paper "ROUTERBENCH: A Benchmark for Multi-LLM Routing System" ☆143 · Updated last year
- The official evaluation suite and dynamic data release for MixEval. ☆250 · Updated 11 months ago
- [ICLR 2024] Skeleton-of-Thought: Prompting LLMs for Efficient Parallel Generation ☆176 · Updated last year
- Codebase accompanying the "Summary of a Haystack" paper. ☆79 · Updated last year
- Open Implementations of LLM Analyses ☆107 · Updated last year
- Benchmarking LLMs with Challenging Tasks from Real Users ☆241 · Updated 11 months ago
- Evaluating LLMs with CommonGen-Lite ☆91 · Updated last year
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆117 · Updated last year
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models ☆62 · Updated last year
- LongEmbed: Extending Embedding Models for Long Context Retrieval (EMNLP 2024) ☆144 · Updated 11 months ago
- Benchmark baseline for retrieval QA applications ☆115 · Updated last year
- Code repo for "Agent Instructs Large Language Models to be General Zero-Shot Reasoners" ☆115 · Updated last year
- RepoQA: Evaluating Long-Context Code Understanding ☆117 · Updated 11 months ago
- Official repo for "Make Your LLM Fully Utilize the Context" ☆259 · Updated last year
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆311 · Updated last year
- Data preparation code for the Amber 7B LLM ☆92 · Updated last year
- 🚢 Data Toolkit for Sailor Language Models ☆94 · Updated 7 months ago
- Spherically merge PyTorch/HF-format language models with minimal feature loss. ☆138 · Updated 2 years ago
- xCodeEval: A Large Scale Multilingual Multitask Benchmark for Code Understanding, Generation, Translation and Retrieval ☆86 · Updated last year
- Official implementation for "Extending LLMs' Context Window with 100 Samples" ☆80 · Updated last year
- Complex Function Calling Benchmark. ☆136 · Updated 8 months ago
- Code for the paper "Same Task, More Tokens: the Impact of Input Length on the Reasoning Performance of Large Language Models" ☆54 · Updated last year
- Lightweight demos for finetuning LLMs. Powered by 🤗 transformers and open-source datasets. ☆78 · Updated 11 months ago
- Code accompanying "How I learned to start worrying about prompt formatting". ☆111 · Updated 4 months ago
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆205 · Updated last year
- Repo for the paper "Shepherd: A Critic for Language Model Generation" ☆217 · Updated 2 years ago
- ToolBench, an evaluation suite for LLM tool manipulation capabilities. ☆163 · Updated last year