lmarena / arena-hard-auto
Arena-Hard-Auto: An automatic LLM benchmark.
☆745 · Updated last month
Alternatives and similar repositories for arena-hard-auto:
Users interested in arena-hard-auto are comparing it to the libraries listed below.
- Official repository for ICLR 2025 paper "Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing". Your efficient an… ☆631 · Updated last week
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆1,180 · Updated this week
- ☆1,006 · Updated 2 months ago
- Recipes to scale inference-time compute of open models ☆1,009 · Updated last month
- Large Reasoning Models ☆801 · Updated 2 months ago
- Automatically evaluate your LLMs in Google Colab ☆592 · Updated 9 months ago
- Automatic evals for LLMs ☆281 · Updated this week
- Synthetic data curation for post-training and structured data extraction ☆836 · Updated this week
- ☆893 · Updated 3 weeks ago
- An Open Large Reasoning Model for Real-World Solutions ☆1,449 · Updated 2 months ago
- ☆496 · Updated 3 months ago
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" ☆325 · Updated 3 weeks ago
- LiveBench: A Challenging, Contamination-Free LLM Benchmark ☆523 · Updated last week
- Official repo for the paper "Scaling Synthetic Data Creation with 1,000,000,000 Personas" ☆1,020 · Updated this week
- Scalable toolkit for efficient model alignment ☆722 · Updated this week
- RewardBench: the first evaluation tool for reward models. ☆508 · Updated this week
- Code for Quiet-STaR ☆713 · Updated 6 months ago
- A library for easily merging multiple LLM experts and efficiently training the merged LLM. ☆442 · Updated 5 months ago
- OLMoE: Open Mixture-of-Experts Language Models ☆613 · Updated 2 months ago
- Evaluate your LLM's response with Prometheus and GPT4 💯 ☆873 · Updated last month
- An Open Source Toolkit For LLM Distillation ☆499 · Updated last month
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆700 · Updated 4 months ago
- ☆1,334 · Updated 3 months ago
- Doing simple retrieval from LLMs at various context lengths to measure accuracy ☆1,715 · Updated 6 months ago
- MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering ☆618 · Updated last month
- Official repository for ORPO ☆438 · Updated 8 months ago
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆821 · Updated this week
- This repo contains the source code for RULER: What’s the Real Context Size of Your Long-Context Language Models? ☆938 · Updated 3 weeks ago