lmarena / arena-hard-auto
Arena-Hard-Auto: An automatic LLM benchmark.
☆963 · Updated 5 months ago
Alternatives and similar repositories for arena-hard-auto
Users interested in arena-hard-auto are comparing it to the repositories listed below.
- ☆556 · Updated last year
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" ☆723 · Updated 4 months ago
- Automatic evals for LLMs ☆559 · Updated 5 months ago
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … ☆793 · Updated 8 months ago
- Recipes to scale inference-time compute of open models ☆1,118 · Updated 6 months ago
- OLMoE: Open Mixture-of-Experts Language Models ☆916 · Updated 2 months ago
- FuseAI Project ☆584 · Updated 10 months ago
- Code for Quiet-STaR ☆742 · Updated last year
- An Open Source Toolkit For LLM Distillation ☆785 · Updated 4 months ago
- ☆966 · Updated 10 months ago
- The code and data for "MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark" [NeurIPS 2024] ☆315 · Updated last week
- RewardBench: the first evaluation tool for reward models. ☆660 · Updated 5 months ago
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆750 · Updated last year
- ☆1,035 · Updated 11 months ago
- ☆1,348 · Updated last year
- Large Reasoning Models ☆807 · Updated last year
- [COLM 2025] LIMO: Less is More for Reasoning ☆1,053 · Updated 4 months ago
- ☆1,015 · Updated 5 months ago
- Official repository for ORPO ☆467 · Updated last year
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆2,141 · Updated last week
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆931 · Updated 9 months ago
- An Open Large Reasoning Model for Real-World Solutions