Arena-Hard-Auto: An automatic LLM benchmark.
☆1,006 · Jun 21, 2025 · Updated 8 months ago
Alternatives and similar repositories for arena-hard-auto
Users interested in arena-hard-auto are comparing it to the libraries listed below.
- Benchmarking LLMs with Challenging Tasks from Real Users ☆246 · Nov 3, 2024 · Updated last year
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,953 · Aug 9, 2025 · Updated 7 months ago
- RewardBench: the first evaluation tool for reward models. ☆697 · Feb 16, 2026 · Updated 3 weeks ago
- The official evaluation suite and dynamic data release for MixEval. ☆255 · Nov 10, 2024 · Updated last year
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆946 · Feb 16, 2025 · Updated last year
- A framework for few-shot evaluation of language models. ☆11,540 · Mar 2, 2026 · Updated last week
- AllenAI's post-training codebase ☆3,605 · Updated this week
- ☆4,368 · Jul 31, 2025 · Updated 7 months ago
- ☆1,107 · Jan 10, 2026 · Updated last month
- Tools for merging pretrained large language models. ☆6,826 · Feb 28, 2026 · Updated last week
- Doing simple retrieval from LLM models at various context lengths to measure accuracy ☆2,194 · Aug 17, 2024 · Updated last year
- Robust recipes to align language models with human and AI preferences ☆5,510 · Sep 8, 2025 · Updated 6 months ago
- An easy-to-use, scalable, and high-performance agentic RL framework based on Ray (PPO & DAPO & REINFORCE++ & TIS & vLLM & Ray & Async RL) ☆9,084 · Updated this week
- A benchmark for emotional intelligence in large language models ☆405 · Jul 26, 2024 · Updated last year
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … ☆833 · Mar 17, 2025 · Updated 11 months ago
- Scalable toolkit for efficient model alignment ☆849 · Oct 6, 2025 · Updated 5 months ago
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆2,324 · Updated this week
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" ☆804 · Jul 16, 2025 · Updated 7 months ago
- ☆111 · Nov 7, 2024 · Updated last year
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,663 · Mar 8, 2024 · Updated 2 years ago
- A simple unified framework for evaluating LLMs ☆264 · Apr 14, 2025 · Updated 10 months ago
- Official repository for ORPO ☆472 · May 31, 2024 · Updated last year
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆3,114 · Mar 2, 2026 · Updated last week
- Evaluate your LLM's response with Prometheus and GPT4 💯 ☆1,050 · Apr 25, 2025 · Updated 10 months ago
- Minimalistic large language model 3D-parallelism training ☆2,588 · Feb 19, 2026 · Updated 2 weeks ago
- LiveBench: A Challenging, Contamination-Free LLM Benchmark ☆1,075 · Updated this week
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆906 · Sep 30, 2025 · Updated 5 months ago
- OpenCompass is an LLM evaluation platform supporting a wide range of models (Llama3, Mistral, InternLM2, GPT-4, LLaMA2, Qwen, GLM, Claude, … ☆6,705 · Feb 27, 2026 · Updated last week
- Train transformer language models with reinforcement learning. ☆17,523 · Updated this week
- ☆565 · Nov 20, 2024 · Updated last year
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆316 · Dec 20, 2023 · Updated 2 years ago
- Recipes to train reward models for RLHF. ☆1,517 · Apr 24, 2025 · Updated 10 months ago
- [ICLR 2024 Spotlight] FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets ☆217 · Dec 24, 2023 · Updated 2 years ago
- The official implementation of Self-Play Fine-Tuning (SPIN) ☆1,235 · May 8, 2024 · Updated last year
- ☆62 · May 13, 2025 · Updated 9 months ago
- LongBench v2 and LongBench (ACL '25 & '24) ☆1,101 · Jan 15, 2025 · Updated last year
- A series of technical reports on Slow Thinking with LLMs ☆761 · Aug 13, 2025 · Updated 6 months ago
- verl: Volcano Engine Reinforcement Learning for LLMs ☆19,519 · Mar 2, 2026 · Updated last week
- 800,000 step-level correctness labels on LLM solutions to MATH problems ☆2,096 · Jun 1, 2023 · Updated 2 years ago