WildEval / ZeroEval
A simple unified framework for evaluating LLMs
☆261 · Updated 9 months ago
Alternatives and similar repositories for ZeroEval
Users interested in ZeroEval are comparing it to the libraries listed below.
- Benchmarking LLMs with Challenging Tasks from Real Users ☆245 · Updated last year
- ☆203 · Updated 9 months ago
- Reproducible, flexible LLM evaluations ☆331 · Updated this week
- LOFT: A 1 Million+ Token Long-Context Benchmark ☆225 · Updated 7 months ago
- The official evaluation suite and dynamic data release for MixEval. ☆254 · Updated last year
- Code for NeurIPS'24 paper 'Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization' ☆235 · Updated 6 months ago
- The HELMET Benchmark ☆198 · Updated last month
- ☆123 · Updated 11 months ago
- Implementation of the Quiet-STaR paper (https://arxiv.org/pdf/2403.09629.pdf) ☆54 · Updated last year
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" [COLM 2025]☆178Updated 6 months ago
- Open-source code for the paper "Retrieval Head Mechanistically Explains Long-Context Factuality" ☆230 · Updated last year
- BABILong is a benchmark for LLM evaluation using the needle-in-a-haystack approach. ☆238 · Updated 4 months ago
- ☆313 · Updated last year
- ☆74 · Updated 11 months ago
- ☆107 · Updated last year
- ☆99 · Updated last year
- Evaluating LLMs with fewer examples ☆169 · Updated last year
- "Improving Mathematical Reasoning with Process Supervision" by OPENAI☆114Updated last week
- Code and data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR 2025] ☆110 · Updated 11 months ago
- Official repository for the ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ☆183 · Updated 8 months ago
- ☆328 · Updated 8 months ago
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs. ☆455 · Updated last year
- Code and example data for the paper "Rule Based Rewards for Language Model Safety" ☆205 · Updated last year
- [NeurIPS'24 Spotlight] Observational Scaling Laws ☆58 · Updated last year
- [ICLR 2026] Learning to Reason without External Rewards ☆388 · Updated this week
- Automatic evals for LLMs ☆578 · Updated last month
- RewardBench: the first evaluation tool for reward models. ☆683 · Updated 2 weeks ago
- ☆139 · Updated last year
- Official repository for "Scaling Retrieval-Based Langauge Models with a Trillion-Token Datastore".☆223Updated last month
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆189 · Updated 10 months ago