terryyz / llm-benchmark
A list of LLM benchmark frameworks.
☆73 · Updated last year
Alternatives and similar repositories for llm-benchmark
Users interested in llm-benchmark are comparing it to the libraries listed below.
- The official evaluation suite and dynamic data release for MixEval. ☆253 · Updated last year
- [ICLR 2024] Skeleton-of-Thought: Prompting LLMs for Efficient Parallel Generation ☆182 · Updated last year
- Benchmarking LLMs with Challenging Tasks from Real Users ☆246 · Updated last year
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language Models… ☆245 · Updated last year
- Spherical Merge Pytorch/HF format Language Models with minimal feature loss. ☆141 · Updated 2 years ago
- Benchmark baseline for retrieval QA applications ☆118 · Updated last year
- Open Implementations of LLM Analyses ☆108 · Updated last year
- 🔧 Compare how Agent systems perform on several benchmarks. 📊🚀 ☆102 · Updated 4 months ago
- LongEmbed: Extending Embedding Models for Long Context Retrieval (EMNLP 2024) ☆145 · Updated last year
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models ☆63 · Updated last year
- ☆101 · Updated last year
- Awesome LLM plaza: daily tracking of all sorts of awesome LLM topics, e.g. LLMs for coding, robotics, reasoning, multimodality, etc. ☆212 · Updated last month
- ☆78 · Updated last year
- Complex Function Calling Benchmark. ☆157 · Updated 11 months ago
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆118 · Updated last year
- The code for the paper "RouterBench: A Benchmark for Multi-LLM Routing System" ☆153 · Updated last year
- Accelerating your LLM training to full speed! Made with ❤️ by ServiceNow Research ☆272 · Updated this week
- Official repo for "Make Your LLM Fully Utilize the Context" ☆261 · Updated last year
- [ACL 2024] LLM2LLM: Boosting LLMs with Novel Iterative Data Enhancement ☆192 · Updated last year
- Model, Code & Data for the EMNLP'23 paper "Making Large Language Models Better Data Creators" ☆137 · Updated 2 years ago
- Self-Reflection in LLM Agents: Effects on Problem-Solving Performance ☆92 · Updated last year
- CodeSage: Code Representation Learning At Scale (ICLR 2024) ☆114 · Updated last year
- ToolBench, an evaluation suite for LLM tool manipulation capabilities. ☆167 · Updated last year
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆315 · Updated 2 years ago
- Evaluating LLMs with CommonGen-Lite ☆93 · Updated last year
- This is the repo for the paper "Shepherd: A Critic for Language Model Generation" ☆220 · Updated 2 years ago
- Lightweight demos for finetuning LLMs. Powered by 🤗 transformers and open-source datasets. ☆78 · Updated last year
- ☆43 · Updated last year
- Code repo for "Agent Instructs Large Language Models to be General Zero-Shot Reasoners" ☆118 · Updated 2 months ago
- Implementation of the LongRoPE paper: Extending LLM Context Window Beyond 2 Million Tokens ☆152 · Updated last year