lmarena / arena-hard-auto
Arena-Hard-Auto: An automatic LLM benchmark.
☆781 · Updated this week
Alternatives and similar repositories for arena-hard-auto:
Users interested in arena-hard-auto are comparing it to the repositories listed below.
- ☆1,015 · Updated 4 months ago
- Automatic evals for LLMs ☆370 · Updated this week
- ☆512 · Updated 5 months ago
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … ☆676 · Updated last month
- ☆920 · Updated 2 months ago
- Official codebase for "SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution" ☆503 · Updated last month
- ☆630 · Updated 3 weeks ago
- Recipes to scale inference-time compute of open models ☆1,055 · Updated last month
- An Open Large Reasoning Model for Real-World Solutions ☆1,484 · Updated last month
- Code for Quiet-STaR ☆730 · Updated 8 months ago
- ☆518 · Updated last week
- RewardBench: the first evaluation tool for reward models. ☆555 · Updated last month
- Official repository for ORPO ☆448 · Updated 10 months ago
- Large Reasoning Models ☆802 · Updated 4 months ago
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,718 · Updated 3 months ago
- Automatically evaluate your LLMs in Google Colab ☆615 · Updated 11 months ago
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" ☆440 · Updated this week
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆873 · Updated 2 months ago
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆1,438 · Updated this week
- Training Large Language Models to Reason in a Continuous Latent Space ☆1,062 · Updated 2 months ago
- Verifiers for LLM Reinforcement Learning ☆813 · Updated 3 weeks ago
- Code for the paper "Training Software Engineering Agents and Verifiers with SWE-Gym" ☆438 · Updated 3 weeks ago
- The official evaluation suite and dynamic data release for MixEval. ☆235 · Updated 5 months ago
- ☆1,355 · Updated 5 months ago
- A simple unified framework for evaluating LLMs ☆209 · Updated last week
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆299 · Updated last year
- LiveBench: A Challenging, Contamination-Free LLM Benchmark ☆679 · Updated this week
- Synthetic data curation for post-training and structured data extraction ☆1,230 · Updated this week
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning ☆650 · Updated 10 months ago
- Building Open LLM Web Agents with Self-Evolving Online Curriculum RL ☆361 · Updated 2 weeks ago