lmarena / arena-hard-auto
Arena-Hard-Auto: An automatic LLM benchmark.
☆851 · Updated this week
Alternatives and similar repositories for arena-hard-auto
Users interested in arena-hard-auto are comparing it to the repositories listed below.
- ☆1,356 · Updated 7 months ago
- An Open Large Reasoning Model for Real-World Solutions ☆1,498 · Updated 3 weeks ago
- FuseAI Project ☆576 · Updated 5 months ago
- OLMoE: Open Mixture-of-Experts Language Models ☆785 · Updated 3 months ago
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … ☆717 · Updated 3 months ago
- ☆520 · Updated 7 months ago
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆1,641 · Updated this week
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆899 · Updated 4 months ago
- Large Reasoning Models ☆804 · Updated 6 months ago
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" ☆554 · Updated this week
- Official repository for ORPO ☆455 · Updated last year
- Automatic evals for LLMs ☆437 · Updated 2 weeks ago
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆732 · Updated 8 months ago
- Code for Quiet-STaR ☆734 · Updated 10 months ago
- ☆1,025 · Updated 6 months ago
- LIMO: Less is More for Reasoning ☆963 · Updated 2 months ago
- This repo contains the source code for RULER: What’s the Real Context Size of Your Long-Context Language Models? ☆1,149 · Updated 2 weeks ago
- RewardBench: the first evaluation tool for reward models. ☆604 · Updated 2 weeks ago
- The official evaluation suite and dynamic data release for MixEval. ☆242 · Updated 7 months ago
- ☆782 · Updated last month
- Doing simple retrieval from LLM models at various context lengths to measure accuracy ☆1,897 · Updated 10 months ago
- ☆900 · Updated 9 months ago
- An Open Source Toolkit For LLM Distillation ☆651 · Updated 3 weeks ago
- ☆939 · Updated 5 months ago
- Official codebase for "SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution" ☆551 · Updated 3 months ago
- A benchmark for emotional intelligence in large language models ☆306 · Updated 11 months ago
- Recipes to scale inference-time compute of open models ☆1,097 · Updated last month
- Reproducible, flexible LLM evaluations ☆214 · Updated last month
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆2,773 · Updated this week
- Synthetic data curation for post-training and structured data extraction ☆1,414 · Updated last week