lmarena / arena-hard-auto
Arena-Hard-Auto: An automatic LLM benchmark.
☆870 · Updated 3 weeks ago
Alternatives and similar repositories for arena-hard-auto
Users interested in arena-hard-auto are comparing it to the repositories listed below.
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" ☆590 · Updated last week
- ☆524 · Updated 7 months ago
- Code for Quiet-STaR ☆735 · Updated 10 months ago
- Automatic evals for LLMs ☆467 · Updated 2 weeks ago
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … ☆730 · Updated 3 months ago
- A project to improve skills of large language models ☆458 · Updated this week
- RewardBench: the first evaluation tool for reward models. ☆612 · Updated last month
- GPQA: A Graduate-Level Google-Proof Q&A Benchmark ☆370 · Updated 9 months ago
- ☆1,025 · Updated 7 months ago
- An Open Source Toolkit For LLM Distillation ☆678 · Updated last week
- ☆824 · Updated 2 weeks ago
- Official repository for ORPO ☆458 · Updated last year
- OLMoE: Open Mixture-of-Experts Language Models ☆809 · Updated 4 months ago
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆306 · Updated last year
- ☆949 · Updated 5 months ago
- Official codebase for "SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution" ☆566 · Updated 4 months ago
- Large Reasoning Models ☆805 · Updated 7 months ago
- Recipes to scale inference-time compute of open models ☆1,106 · Updated last month
- [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning ☆357 · Updated 10 months ago
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,800 · Updated 6 months ago
- ☆1,356 · Updated 7 months ago
- LiveBench: A Challenging, Contamination-Free LLM Benchmark ☆823 · Updated this week
- FuseAI Project ☆578 · Updated 5 months ago
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆735 · Updated 9 months ago
- MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering ☆800 · Updated 3 weeks ago
- The code and data for "MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark" [NeurIPS 2024] ☆259 · Updated 4 months ago
- ☆585 · Updated 3 months ago
- Code for the paper "Training Software Engineering Agents and Verifiers with SWE-Gym" [ICML 2025] ☆498 · Updated 2 months ago
- A library for easily merging multiple LLM experts and efficiently training the merged LLM. ☆485 · Updated 10 months ago
- An Open Large Reasoning Model for Real-World Solutions ☆1,508 · Updated last month