lmarena / arena-hard-auto
Arena-Hard-Auto: An automatic LLM benchmark.
☆940 · Updated 3 months ago
Alternatives and similar repositories for arena-hard-auto
Users interested in arena-hard-auto are comparing it to the repositories listed below.
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … ☆778 · Updated 7 months ago
- FuseAI Project ☆584 · Updated 8 months ago
- ☆543 · Updated 11 months ago
- Recipes to scale inference-time compute of open models ☆1,109 · Updated 4 months ago
- Code for Quiet-STaR ☆739 · Updated last year
- The code and data for "MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark" [NeurIPS 2024] ☆296 · Updated 7 months ago
- RewardBench: the first evaluation tool for reward models. ☆642 · Updated 4 months ago
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" ☆680 · Updated 3 months ago
- Automatic evals for LLMs ☆543 · Updated 3 months ago
- ☆962 · Updated 8 months ago
- ☆1,034 · Updated 10 months ago
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆747 · Updated last year
- An Open Large Reasoning Model for Real-World Solutions ☆1,522 · Updated 4 months ago
- Large Reasoning Models ☆805 · Updated 10 months ago
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,877 · Updated 2 months ago
- Official repository for ORPO ☆463 · Updated last year
- ☆1,350 · Updated 10 months ago
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆923 · Updated 8 months ago
- A project to improve skills of large language models ☆581 · Updated last week
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆2,009 · Updated this week
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning ☆660 · Updated last year
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,615 · Updated last year
- ☆963 · Updated 3 months ago
- OLMoE: Open Mixture-of-Experts Language Models ☆886 · Updated 3 weeks ago
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. ☆548 · Updated last year
- LiveBench: A Challenging, Contamination-Free LLM Benchmark ☆890 · Updated last week
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆889 · Updated 2 weeks ago
- An Open Source Toolkit For LLM Distillation ☆740 · Updated 3 months ago
- Doing simple retrieval from LLM models at various context lengths to measure accuracy ☆2,056 · Updated last year
- GPQA: A Graduate-Level Google-Proof Q&A Benchmark ☆417 · Updated last year