lmarena / arena-hard-auto
Arena-Hard-Auto: An automatic LLM benchmark.
☆928 · Updated 3 months ago
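Arena-Hard-Auto scores a model by having a strong LLM judge compare its answers against a fixed baseline model's answers on the same prompts, then aggregating the pairwise verdicts (the repository itself fits a Bradley-Terry model with bootstrapped confidence intervals). The sketch below shows the simplest such aggregation, a tie-adjusted win rate; the `win_rate` function and the verdict labels are illustrative assumptions, not the repository's actual API.

```python
# Minimal sketch of pairwise LLM-as-judge aggregation, in the style of
# Arena-Hard-Auto (illustrative only; not the repository's actual pipeline).
# Each verdict is a judge's call comparing a candidate model's answer
# against a fixed baseline's answer on the same prompt.

from collections import Counter

def win_rate(verdicts: list[str]) -> float:
    """Aggregate judge verdicts ('win', 'tie', or 'loss') into a score.
    Ties count as half a win, a common pairwise-scoring convention."""
    counts = Counter(verdicts)
    total = sum(counts.values())
    if total == 0:
        raise ValueError("no verdicts to aggregate")
    return (counts["win"] + 0.5 * counts["tie"]) / total

# Example: 500 judged prompts for a hypothetical candidate model.
verdicts = ["win"] * 260 + ["tie"] * 80 + ["loss"] * 160
print(f"win rate vs. baseline: {win_rate(verdicts):.1%}")  # 60.0%
```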
Alternatives and similar repositories for arena-hard-auto
Users interested in arena-hard-auto are comparing it to the repositories listed below.
- Automatic evals for LLMs ☆528 · Updated 3 months ago
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … ☆772 · Updated 6 months ago
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" ☆665 · Updated 2 months ago
- RewardBench: the first evaluation tool for reward models. ☆638 · Updated 3 months ago
- Code for Quiet-STaR ☆740 · Updated last year
- Recipes to scale inference-time compute of open models ☆1,109 · Updated 4 months ago
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆1,962 · Updated this week
- Large Reasoning Models ☆805 · Updated 9 months ago
- Chat Templates for 🤗 HuggingFace Large Language Models ☆701 · Updated 9 months ago
- Official repository for ORPO ☆464 · Updated last year
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆746 · Updated last year
- An Open Large Reasoning Model for Real-World Solutions ☆1,522 · Updated 3 months ago
- An Open Source Toolkit For LLM Distillation ☆729 · Updated 2 months ago
- OLMoE: Open Mixture-of-Experts Language Models ☆872 · Updated this week
- FuseAI Project ☆583 · Updated 8 months ago
- A project to improve the skills of large language models ☆564 · Updated this week
- [COLM 2025] LIMO: Less is More for Reasoning ☆1,020 · Updated last month
- The code and data for "MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark" [NeurIPS 2024] ☆292 · Updated 7 months ago
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,866 · Updated last month
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆922 · Updated 7 months ago
- LiveBench: A Challenging, Contamination-Free LLM Benchmark ☆879 · Updated this week
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆886 · Updated 2 months ago
- [NeurIPS'25] Official codebase for "SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution" ☆597 · Updated 6 months ago
- A framework for the evaluation of autoregressive code generation language models. ☆981 · Updated 2 months ago
- Code and Data for Tau-Bench ☆854 · Updated last month