Benchmarking LLMs with Challenging Tasks from Real Users
☆247 · Updated Nov 3, 2024
Alternatives and similar repositories for WildBench
Users interested in WildBench are comparing it to the repositories listed below.
- Arena-Hard-Auto: An automatic LLM benchmark. ☆1,008 · Updated Jun 21, 2025
- A simple unified framework for evaluating LLMs ☆266 · Updated Apr 14, 2025
- The official evaluation suite and dynamic data release for MixEval. ☆255 · Updated Nov 10, 2024
- ☆17 · Updated Oct 22, 2024
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,961 · Updated Aug 9, 2025
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆137 · Updated Jul 8, 2024
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆948 · Updated Feb 16, 2025
- [NeurIPS 2024] OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI ☆107 · Updated Mar 6, 2025
- RewardBench: the first evaluation tool for reward models. ☆704 · Updated Feb 16, 2026
- AllenAI's post-training codebase ☆3,629 · Updated Mar 16, 2026
- [ICLR 2024 Spotlight] FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets ☆217 · Updated Dec 24, 2023
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. ☆552 · Updated Mar 10, 2024
- A framework for few-shot evaluation of language models. ☆11,704 · Updated Mar 5, 2026
- Logic grid puzzle ("zebra puzzle") generator and solver ☆30 · Updated Mar 1, 2024
- Evolve LLM training instructions from English into any language. ☆120 · Updated Sep 15, 2023
- ☆11 · Updated Sep 19, 2025
- [ACL 2025 Main] Official Repository for "Evaluating Language Models as Synthetic Data Generators" ☆41 · Updated Dec 13, 2024
- Evaluating LLMs with CommonGen-Lite ☆95 · Updated Mar 21, 2024
- Generative Judge for Evaluating Alignment ☆248 · Updated Jan 18, 2024
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR2024] ☆591 · Updated Dec 9, 2024
- Tools for merging pretrained large language models. ☆6,867 · Updated Mar 15, 2026
- ☆63 · Updated May 13, 2025
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … ☆833 · Updated Mar 17, 2025
- [NeurIPS 2025] "Reasoning Models Better Express Their Confidence" ☆22 · Updated Nov 19, 2025
- Robust recipes to align language models with human and AI preferences ☆5,527 · Updated Sep 8, 2025
- An Analytical Evaluation Board of Multi-turn LLM Agents [NeurIPS 2024 Oral] ☆401 · Updated May 20, 2024
- Evaluate your LLM's response with Prometheus and GPT4 💯 ☆1,057 · Updated Apr 25, 2025
- Evaluating LLMs with fewer examples ☆171 · Updated Apr 12, 2024
- Doing simple retrieval from LLM models at various context lengths to measure accuracy ☆2,206 · Updated Aug 17, 2024
- Automatically evaluate your LLMs in Google Colab ☆687 · Updated May 7, 2024
- Holistic Evaluation of Language Models (HELM) is an open source Python framework created by the Center for Research on Foundation Models … ☆2,718 · Updated this week
- ☆37 · Updated May 7, 2023
- ☆31 · Updated Jun 12, 2024
- ☆4,406 · Updated Jul 31, 2025
- An original implementation of the paper "CREPE: Open-Domain Question Answering with False Presuppositions" ☆16 · Updated Nov 5, 2024
- BERT score for text generation
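
The last entry refers to BERTScore, which rates generated text against references using contextual embeddings rather than exact n-gram overlap. As a minimal sketch of how such an embedding-based metric is typically invoked (assuming the pip-installable `bert-score` package and its documented `score` function; the candidate and reference strings here are illustrative):

```python
# Minimal sketch: scoring generated text against references with bert-score.
# Assumes `pip install bert-score`; the underlying model is downloaded on first use.
from bert_score import score

candidates = ["The model answered the question correctly."]
references = ["The model gave a correct answer to the question."]

# Returns precision, recall, and F1 tensors, one value per candidate/reference pair.
P, R, F1 = score(candidates, references, lang="en", verbose=True)
print(f"BERTScore F1: {F1.mean().item():.4f}")
```

Because the comparison happens in embedding space, paraphrases that share little surface wording can still score highly, which is why metrics like this are common companions to the LLM-as-judge evaluators listed above.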