A simple unified framework for evaluating LLMs
☆264 · Apr 14, 2025 · Updated 10 months ago
Alternatives and similar repositories for ZeroEval
Users interested in ZeroEval are comparing it to the repositories listed below.
- Benchmarking LLMs with Challenging Tasks from Real Users ☆246 · Nov 3, 2024 · Updated last year
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆149 · Oct 27, 2024 · Updated last year
- Arena-Hard-Auto: An automatic LLM benchmark. ☆1,006 · Jun 21, 2025 · Updated 8 months ago
- ☆46 · Jun 24, 2025 · Updated 8 months ago
- The official evaluation suite and dynamic data release for MixEval. ☆255 · Nov 10, 2024 · Updated last year
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs ☆97 · Nov 17, 2024 · Updated last year
- [NeurIPS 2024] OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI ☆107 · Mar 6, 2025 · Updated last year
- ☆44 · Sep 19, 2024 · Updated last year
- ☆1,107 · Jan 10, 2026 · Updated 2 months ago
- This library supports evaluating disparities in generated image quality, diversity, and consistency between geographic regions. ☆20 · Jun 3, 2024 · Updated last year
- RewardBench: the first evaluation tool for reward models. ☆702 · Feb 16, 2026 · Updated 3 weeks ago
- ☆14 · Dec 1, 2025 · Updated 3 months ago
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,953 · Aug 9, 2025 · Updated 7 months ago
- ☆132 · May 8, 2025 · Updated 10 months ago
- [NeurIPS 2024 D&B] Evaluating Copyright Takedown Methods for Language Models ☆17 · Jul 17, 2024 · Updated last year
- ☆325 · Jul 25, 2024 · Updated last year
- MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models ☆454 · Feb 1, 2024 · Updated 2 years ago
- Scalable toolkit for efficient model alignment ☆849 · Oct 6, 2025 · Updated 5 months ago
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆589 · Dec 9, 2024 · Updated last year
- ☆4,390 · Jul 31, 2025 · Updated 7 months ago
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆946 · Feb 16, 2025 · Updated last year
- AllenAI's post-training codebase ☆3,614 · Updated this week
- The official implementation of Self-Play Fine-Tuning (SPIN) ☆1,235 · May 8, 2024 · Updated last year
- Evaluating LLMs with fewer examples ☆169 · Apr 12, 2024 · Updated last year
- O1 Replication Journey ☆2,000 · Jan 14, 2025 · Updated last year
- Reproducing R1 for Code with Reliable Rewards ☆12 · Apr 9, 2025 · Updated 11 months ago
- Reproducible and flexible LLM evaluations for scientific reasoning. ☆26 · Jul 23, 2025 · Updated 7 months ago
- Benchmark tests supporting the TiledCUDA library. ☆18 · Nov 19, 2024 · Updated last year
- [ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI ☆485 · Jan 3, 2026 · Updated 2 months ago
- A simple GPT-based evaluation tool for multi-aspect, interpretable assessment of LLMs. ☆90 · Jan 29, 2024 · Updated 2 years ago
- The Paper List on Data Contamination for Large Language Models Evaluation. ☆110 · Jan 29, 2026 · Updated last month
- Async pipelined version of Verl ☆124 · Apr 8, 2025 · Updated 11 months ago
- [NeurIPS'24] Official code for *🎯DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving* ☆121 · Dec 10, 2024 · Updated last year
- ☆116 · May 7, 2025 · Updated 10 months ago
- [ACL 2025 Findings] Autonomous Data Selection with Zero-shot Generative Classifiers for Mathematical Texts (As Huggingface Daily Papers: …) ☆90 · Nov 23, 2025 · Updated 3 months ago
- ☆21 · Apr 2, 2025 · Updated 11 months ago
- An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO & REINFORCE++ & TIS & vLLM & Ray & Async RL) ☆9,084 · Updated this week
- Robust recipes to align language models with human and AI preferences ☆5,510 · Sep 8, 2025 · Updated 6 months ago
- Modified Arena-Hard-Auto LLM evaluation toolkit with an emphasis on Russian language ☆47 · Mar 20, 2025 · Updated 11 months ago