The official evaluation suite and dynamic data release for MixEval.
☆255 · Updated Nov 10, 2024
Alternatives and similar repositories for MixEval
Users interested in MixEval are comparing it to the libraries listed below.
- Benchmarking LLMs with Challenging Tasks from Real Users · ☆246 · Updated Nov 3, 2024
- Arena-Hard-Auto: An automatic LLM benchmark. · ☆1,006 · Updated Jun 21, 2025
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] · ☆149 · Updated Oct 27, 2024
- Automatically evaluate your LLMs in Google Colab · ☆687 · Updated May 7, 2024
- ☆44 · Updated Jun 19, 2024
- [ICML 2025] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale · ☆266 · Updated Jul 8, 2025
- ☆56 · Updated Nov 6, 2024
- The Universe of Evaluation. All about the evaluation of LLMs. · ☆232 · Updated Jul 9, 2024
- RewardBench: the first evaluation tool for reward models. · ☆702 · Updated Feb 16, 2026
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward · ☆946 · Updated Feb 16, 2025
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] · ☆589 · Updated Dec 9, 2024
- Tools for merging pretrained large language models. · ☆6,842 · Updated Feb 28, 2026
- [NeurIPS 2024] OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI · ☆107 · Updated Mar 6, 2025
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends · ☆2,324 · Updated this week
- ☆130 · Updated Oct 1, 2024
- A simple unified framework for evaluating LLMs · ☆264 · Updated Apr 14, 2025
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … · ☆833 · Updated Mar 17, 2025
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. · ☆1,953 · Updated Aug 9, 2025
- Robust recipes to align language models with human and AI preferences · ☆5,510 · Updated Sep 8, 2025
- ☆320 · Updated Sep 18, 2024
- ☆313 · Updated Jun 9, 2024
- Data and tools for generating and inspecting OLMo pre-training data. · ☆1,434 · Updated Nov 5, 2025
- Evaluate your LLM's responses with Prometheus and GPT-4 💯 · ☆1,050 · Updated Apr 25, 2025
- [Findings of EMNLP 2022] From Mimicking to Integrating: Knowledge Integration for Pre-Trained Language Models · ☆19 · Updated Mar 16, 2023
- Evaluation suite for LLMs · ☆379 · Updated Jul 11, 2025
- A recipe for online RLHF and online iterative DPO. · ☆542 · Updated Dec 28, 2024
- Official repository for ORPO · ☆472 · Updated May 31, 2024
- A family of open-source Mixture-of-Experts (MoE) Large Language Models · ☆1,664 · Updated Mar 8, 2024
- Minimalistic large language model 3D-parallelism training · ☆2,588 · Updated Feb 19, 2026
- A framework for few-shot evaluation of language models. · ☆11,618 · Updated this week
- ☆112 · Updated Nov 7, 2024
- Freeing data processing from scripting madness by providing a set of platform-agnostic, customizable pipeline processing blocks. · ☆2,915 · Updated this week
- ☆325 · Updated Jul 25, 2024
- Recipes to train reward models for RLHF. · ☆1,518 · Updated Apr 24, 2025
- BERTScore for text generation · ☆12 · Updated Jan 15, 2025
- ☆20 · Updated Nov 4, 2025
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… · ☆252 · Updated Oct 30, 2024
- ☆138 · Updated Aug 19, 2024
- AllenAI's post-training codebase · ☆3,614 · Updated this week