Benchmarking LLMs with Challenging Tasks from Real Users
☆246 · Updated Nov 3, 2024
Alternatives and similar repositories for WildBench
Users interested in WildBench are also comparing it to the repositories listed below.
- Arena-Hard-Auto: An automatic LLM benchmark. ☆1,003 · Updated Jun 21, 2025
- A simple unified framework for evaluating LLMs ☆264 · Updated Apr 14, 2025
- The official evaluation suite and dynamic data release for MixEval. ☆255 · Updated Nov 10, 2024
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,953 · Updated Aug 9, 2025
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆946 · Updated Feb 16, 2025
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆137 · Updated Jul 8, 2024
- RewardBench: the first evaluation tool for reward models. ☆696 · Updated Feb 16, 2026
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. ☆552 · Updated Mar 10, 2024
- AllenAI's post-training codebase ☆3,592 · Updated Feb 24, 2026
- Evaluating LLMs with fewer examples ☆169 · Updated Apr 12, 2024
- [ICLR 2024 Spotlight] FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets ☆217 · Updated Dec 24, 2023
- A framework for few-shot evaluation of language models. ☆11,478 · Updated Feb 15, 2026
- [NeurIPS 2024] OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI ☆107 · Updated Mar 6, 2025
- Tools for merging pretrained large language models. ☆6,814 · Updated Jan 26, 2026
- Evolve LLM training instructions from English into any other language. ☆119 · Updated Sep 15, 2023
- Evaluate your LLM's response with Prometheus and GPT-4 💯 ☆1,046 · Updated Apr 25, 2025
- Simple and efficient PyTorch-native transformer training and inference (batched) ☆79 · Updated Apr 2, 2024
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆316 · Updated Dec 20, 2023
- BERT score for text generation ☆12 · Updated Jan 15, 2025
- An Analytical Evaluation Board of Multi-turn LLM Agents [NeurIPS 2024 Oral] ☆396 · Updated May 20, 2024
- Holistic Evaluation of Language Models (HELM) is an open-source Python framework created by the Center for Research on Foundation Models (CRFM) at Stanford. ☆2,693 · Updated this week
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆588 · Updated Dec 9, 2024
- Robust recipes to align language models with human and AI preferences ☆5,506 · Updated Sep 8, 2025
- Scalable toolkit for efficient model alignment ☆851 · Updated Oct 6, 2025
- Evaluating LLMs with CommonGen-Lite ☆95 · Updated Mar 21, 2024
- Source code for the paper "Are Human-generated Demonstrations Necessary for In-context Learning?" ☆12 · Updated Jan 21, 2024
- Automatically evaluate your LLMs in Google Colab ☆686 · Updated May 7, 2024
- Simple retrieval from LLMs at various context lengths to measure accuracy ☆2,190 · Updated Aug 17, 2024
- [ACL 2025 Main] Official repository for "Evaluating Language Models as Synthetic Data Generators" ☆41 · Updated Dec 13, 2024
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs ☆459 · Updated Apr 18, 2024
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. ☆829 · Updated Mar 17, 2025
- Data and tools for generating and inspecting OLMo pre-training data. ☆1,416 · Updated Nov 5, 2025
- Minimalistic large language model 3D-parallelism training ☆2,579 · Updated Feb 19, 2026