WildEval / ZeroEval
A simple unified framework for evaluating LLMs
☆240 · Updated 4 months ago
Alternatives and similar repositories for ZeroEval
Users interested in ZeroEval are comparing it to the libraries listed below.
- Benchmarking LLMs with Challenging Tasks from Real Users ☆237 · Updated 9 months ago
- Reproducible, flexible LLM evaluations ☆237 · Updated last month
- ☆187 · Updated 4 months ago
- LOFT: A 1 Million+ Token Long-Context Benchmark ☆209 · Updated 2 months ago
- The HELMET Benchmark ☆165 · Updated last week
- The official evaluation suite and dynamic data release for MixEval. ☆244 · Updated 9 months ago
- 🌾 OAT: A research-friendly framework for LLM online alignment, including reinforcement learning, preference learning, etc. ☆433 · Updated last week
- Code for NeurIPS'24 paper "Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization" ☆228 · Updated last month
- BABILong is a benchmark for LLM evaluation using the needle-in-a-haystack approach. ☆210 · Updated 3 months ago
- Open-source code for the paper "Retrieval Head Mechanistically Explains Long-Context Factuality" ☆207 · Updated last year
- ☆120 · Updated 6 months ago
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" [COLM 2025] ☆170 · Updated last month
- Code and example data for the paper "Rule Based Rewards for Language Model Safety" ☆193 · Updated last year
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" ☆219 · Updated 5 months ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆177 · Updated 5 months ago
- A Collection of Competitive Text-Based Games for Language Model Evaluation and Reinforcement Learning ☆245 · Updated last week
- ☆311 · Updated last year
- Code and Data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR 2025] ☆106 · Updated 6 months ago
- Replicating O1 inference-time scaling laws ☆89 · Updated 8 months ago
- ☆91 · Updated 9 months ago
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆308 · Updated last year
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extremely Long Lengths (ICLR 2024) ☆206 · Updated last year
- Evaluating LLMs with fewer examples ☆160 · Updated last year
- Automatic evals for LLMs ☆519 · Updated last month
- ☆100 · Updated last year
- Benchmark and research code for the paper "SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks" ☆234 · Updated 3 months ago
- Code and results accompanying the paper "Refusal in Language Models Is Mediated by a Single Direction" ☆257 · Updated 2 months ago
- ☆38 · Updated 4 months ago
- Official repository for R2E-Gym: Procedural Environment Generation and Hybrid Verifiers for Scaling Open-Weights SWE Agents ☆145 · Updated last month
- ☆50 · Updated 3 months ago