WildEval / ZeroEval
A simple unified framework for evaluating LLMs
☆254 · Updated 7 months ago
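At its core, a unified evaluation framework is one harness that can run any model callable over any benchmark and score the outputs the same way. The minimal Python sketch below illustrates only that pattern; `run_eval`, `model_fn`, and the toy data are hypothetical names for illustration, not ZeroEval's actual API.

```python
from typing import Callable, Iterable, Tuple

def run_eval(model_fn: Callable[[str], str],
             dataset: Iterable[Tuple[str, str]]) -> float:
    """Score a model over (prompt, reference) pairs with exact-match accuracy.

    Illustrative sketch only; a real framework's interface differs.
    """
    correct = total = 0
    for prompt, reference in dataset:
        prediction = model_fn(prompt).strip()
        correct += int(prediction == reference.strip())
        total += 1
    return correct / max(total, 1)  # guard against an empty dataset

if __name__ == "__main__":
    # Toy model and dataset, just to show the shape of the harness.
    def toy_model(prompt: str) -> str:
        return "4" if "2+2" in prompt else "unknown"

    data = [("What is 2+2?", "4"), ("Capital of France?", "Paris")]
    print(f"exact-match accuracy: {run_eval(toy_model, data):.2f}")  # 0.50
```

Frameworks in this space typically layer prompt formatting, generation settings, and task-specific metrics on top of a loop like this.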
Alternatives and similar repositories for ZeroEval
Users interested in ZeroEval are comparing it to the libraries listed below.
- Benchmarking LLMs with Challenging Tasks from Real Users ☆244 · Updated last year
- ☆197 · Updated 6 months ago
- Reproducible, flexible LLM evaluations ☆264 · Updated 2 weeks ago
- The official evaluation suite and dynamic data release for MixEval. ☆252 · Updated last year
- BABILong is a benchmark for LLM evaluation using the needle-in-a-haystack approach. ☆215 · Updated 2 months ago
- Implementation of the Quiet-STaR paper (https://arxiv.org/pdf/2403.09629.pdf) ☆54 · Updated last year
- ☆314 · Updated last year
- LOFT: A 1 Million+ Token Long-Context Benchmark ☆219 · Updated 5 months ago
- The HELMET Benchmark ☆179 · Updated 2 months ago
- Open-source code for the paper "Retrieval Head Mechanistically Explains Long-Context Factuality" ☆218 · Updated last year
- ☆326 · Updated 5 months ago
- Code for the NeurIPS 2024 paper "Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization" ☆232 · Updated 3 months ago
- ☆124 · Updated 8 months ago
- ☆139 · Updated last year
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆204 · Updated last year
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" [COLM 2025]☆178Updated 4 months ago
- 🌾 OAT: A research-friendly framework for LLM online alignment, including reinforcement learning, preference learning, etc.☆564Updated 2 weeks ago
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples"☆311Updated last year
- ☆103Updated last year
- Official repository for ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning"☆176Updated 5 months ago
- Homepage for ProLong (Princeton long-context language models) and paper "How to Train Long-Context Language Models (Effectively)"☆237Updated 2 months ago
- Self-playing Adversarial Language Game Enhances LLM Reasoning, NeurIPS 2024☆141Updated 8 months ago
- RewardBench: the first evaluation tool for reward models.☆649Updated 5 months ago
- Code and example data for the paper: Rule Based Rewards for Language Model Safety☆202Updated last year
- "Improving Mathematical Reasoning with Process Supervision" by OPENAI☆111Updated 3 weeks ago
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs. ☆443 · Updated last year
- ☆215 · Updated 7 months ago
- ☆77 · Updated 8 months ago
- Official code for "MAmmoTH2: Scaling Instructions from the Web" (NeurIPS 2024) ☆149 · Updated last year
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens" (https://arxiv.org/abs/2402.13718) ☆355 · Updated last year