terryyz / llm-benchmark
A list of LLM benchmark frameworks.
☆70 · Updated last year
Alternatives and similar repositories for llm-benchmark
Users interested in llm-benchmark are comparing it to the libraries listed below:
- [ICLR 2024] Skeleton-of-Thought: Prompting LLMs for Efficient Parallel Generation ☆176 · Updated last year
- Evaluating LLMs with fewer examples ☆161 · Updated last year
- The code for the paper ROUTERBENCH: A Benchmark for Multi-LLM Routing System ☆140 · Updated last year
- Codebase accompanying the Summary of a Haystack paper. ☆79 · Updated last year
- 🔧 Compare how Agent systems perform on several benchmarks. 📊🚀 ☆102 · Updated last month
- Open Implementations of LLM Analyses ☆107 · Updated 11 months ago
- ☆97 · Updated 11 months ago
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆116 · Updated last year
- ☆208 · Updated last year
- The official evaluation suite and dynamic data release for MixEval. ☆246 · Updated 10 months ago
- CodeSage: Code Representation Learning At Scale (ICLR 2024) ☆112 · Updated 11 months ago
- LongEmbed: Extending Embedding Models for Long Context Retrieval (EMNLP 2024) ☆143 · Updated 10 months ago
- Spherical Merge PyTorch/HF format Language Models with minimal feature loss. ☆138 · Updated 2 years ago
- ☆85 · Updated 2 years ago
- Benchmark baseline for retrieval QA applications ☆116 · Updated last year
- Code accompanying "How I learned to start worrying about prompt formatting". ☆111 · Updated 3 months ago
- ToolBench, an evaluation suite for LLM tool manipulation capabilities. ☆161 · Updated last year
- Lightweight demos for finetuning LLMs. Powered by 🤗 transformers and open-source datasets. ☆78 · Updated 11 months ago
- Benchmarking LLMs with Challenging Tasks from Real Users ☆241 · Updated 10 months ago
- Self-Reflection in LLM Agents: Effects on Problem-Solving Performance ☆85 · Updated 10 months ago
- Complex Function Calling Benchmark. ☆135 · Updated 8 months ago
- Official Code Repository for the paper "Distilling LLM Agent into Small Models with Retrieval and Code Tools" ☆154 · Updated last month
- Official repo for "Make Your LLM Fully Utilize the Context" ☆254 · Updated last year
- Code repo for "Agent Instructs Large Language Models to be General Zero-Shot Reasoners" ☆116 · Updated last year
- Evaluating LLMs with CommonGen-Lite ☆91 · Updated last year
- ☆150 · Updated last year
- RepoQA: Evaluating Long-Context Code Understanding ☆117 · Updated 10 months ago
- ☆77 · Updated last year
- ☆118 · Updated 4 months ago
- Official repo of Rephrase-and-Respond: data, code, and evaluation ☆104 · Updated last year