lmarena / arena-hard-auto
Arena-Hard-Auto: An automatic LLM benchmark.
☆823 · Updated 2 weeks ago
Alternatives and similar repositories for arena-hard-auto
Users who are interested in arena-hard-auto are comparing it to the libraries listed below.
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" ☆467 · Updated 2 weeks ago
- ☆515 · Updated 5 months ago
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … ☆699 · Updated last month
- Automatic evals for LLMs ☆388 · Updated this week
- ☆931 · Updated 3 months ago
- ☆1,019 · Updated 4 months ago
- Code for Quiet-STaR ☆731 · Updated 8 months ago
- ☆527 · Updated last month
- Official repository for ORPO ☆452 · Updated 11 months ago
- Large Reasoning Models ☆805 · Updated 5 months ago
- RewardBench: the first evaluation tool for reward models. ☆566 · Updated last week
- ☆691 · Updated 2 weeks ago
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆1,516 · Updated last week
- Official codebase for "SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution" ☆517 · Updated last month
- [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning ☆353 · Updated 8 months ago
- A project to improve the skills of large language models ☆383 · Updated this week
- Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them ☆488 · Updated 10 months ago
- Code for the paper "Training Software Engineering Agents and Verifiers with SWE-Gym" [ICML 2025] ☆455 · Updated last week
- An Open Source Toolkit For LLM Distillation ☆596 · Updated 2 weeks ago
- The official evaluation suite and dynamic data release for MixEval. ☆239 · Updated 6 months ago
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,740 · Updated 4 months ago
- Benchmarking LLMs with Challenging Tasks from Real Users ☆221 · Updated 6 months ago
- An Open Large Reasoning Model for Real-World Solutions ☆1,488 · Updated 2 months ago
- The code and data for "MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark" [NeurIPS 2024] ☆242 · Updated 2 months ago
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆461 · Updated last year
- Recipes to scale inference-time compute of open models ☆1,071 · Updated last week
- A framework for the evaluation of autoregressive code generation language models. ☆943 · Updated 6 months ago
- A simple unified framework for evaluating LLMs ☆211 · Updated last month
- ☆1,356 · Updated 5 months ago
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆723 · Updated 7 months ago