WildEval / ZeroEval
A simple unified framework for evaluating LLMs
☆235 · Updated 3 months ago
Alternatives and similar repositories for ZeroEval
Users interested in ZeroEval are comparing it to the libraries listed below.
- Benchmarking LLMs with Challenging Tasks from Real Users (☆233, updated 9 months ago)
- Reproducible, flexible LLM evaluations (☆226, updated 3 weeks ago)
- The HELMET Benchmark (☆161, updated 3 months ago)
- The official evaluation suite and dynamic data release for MixEval (☆242, updated 8 months ago)
- BABILong is a benchmark for LLM evaluation using the needle-in-a-haystack approach. (☆208, updated 2 months ago)
- LOFT: A 1 Million+ Token Long-Context Benchmark (☆207, updated last month)
- Code for NeurIPS'24 paper "Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization" (☆225, updated 2 weeks ago)
- 🌾 OAT: A research-friendly framework for LLM online alignment, including reinforcement learning, preference learning, etc. (☆418, updated last week)
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" [COLM 2025] (☆169, updated 3 weeks ago)
- Code and example data for the paper "Rule Based Rewards for Language Model Safety" (☆190, updated last year)
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" (☆306, updated last year)
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. (☆175, updated 4 months ago)
- A Collection of Competitive Text-Based Games for Language Model Evaluation and Reinforcement Learning (☆225, updated this week)
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) (☆205, updated last year)
- "Improving Mathematical Reasoning with Process Supervision" by OpenAI (☆112, updated 2 weeks ago)
- Open-source code for the paper "Retrieval Head Mechanistically Explains Long-Context Factuality" (☆205, updated last year)
- Evaluating LLMs with fewer examples (☆160, updated last year)
- Code and Data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR 2025] (☆105, updated 5 months ago)
- Implementation of the Quiet-STaR paper (https://arxiv.org/pdf/2403.09629.pdf) (☆54, updated 11 months ago)
- 🌍 Repository for "AppWorld: A Controllable World of Apps and People for Benchmarking Interactive Coding Agent", ACL'24 Best Resource Paper (☆232, updated 2 months ago)
- [NeurIPS'24 Spotlight] Observational Scaling Laws (☆56, updated 10 months ago)
- Automatic evals for LLMs (☆496, updated last month)
- [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning (☆359, updated 10 months ago)