WildEval / ZeroEval
A simple unified framework for evaluating LLMs
☆219 · Updated 2 months ago
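In broad strokes, a unified eval harness of this kind wraps every benchmark behind one task interface and one model interface, so adding a new task or model is a single plug-in. Below is a minimal sketch of that loop; all names (`Task`, `evaluate`, `exact_match`) are hypothetical illustrations, not ZeroEval's actual API.

```python
# Minimal sketch of a unified LLM eval loop (hypothetical names, not ZeroEval's API).
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    name: str
    examples: list[dict]                # each: {"prompt": ..., "answer": ...}
    score: Callable[[str, str], float]  # (model_output, reference) -> score in [0, 1]

def exact_match(output: str, reference: str) -> float:
    return float(output.strip().lower() == reference.strip().lower())

def evaluate(model: Callable[[str], str], tasks: list[Task]) -> dict[str, float]:
    """Run each task's prompts through one model interface and report mean scores."""
    results = {}
    for task in tasks:
        scores = [task.score(model(ex["prompt"]), ex["answer"]) for ex in task.examples]
        results[task.name] = sum(scores) / len(scores)
    return results

# Usage with a stub model (swap the lambda for a real API call or local inference):
gsm_lite = Task("gsm-lite", [{"prompt": "2 + 2 = ?", "answer": "4"}], exact_match)
print(evaluate(lambda prompt: "4", [gsm_lite]))  # {'gsm-lite': 1.0}
```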
Alternatives and similar repositories for ZeroEval
Users interested in ZeroEval are comparing it to the libraries listed below.
- Benchmarking LLMs with Challenging Tasks from Real Users ☆226 · Updated 7 months ago
- The official evaluation suite and dynamic data release for MixEval. ☆242 · Updated 7 months ago
- ☆180 · Updated 2 months ago
- 🌾 OAT: A research-friendly framework for LLM online alignment, including reinforcement learning, preference learning, etc. ☆379 · Updated 2 weeks ago
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" ☆159 · Updated 2 weeks ago
- Code for NeurIPS'24 paper "Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization" ☆218 · Updated 6 months ago
- ☆115 · Updated 4 months ago
- ☆310 · Updated last year
- ☆97 · Updated 11 months ago
- Open-source code for the paper "Retrieval Head Mechanistically Explains Long-Context Factuality" ☆200 · Updated 10 months ago
- Reproducible, flexible LLM evaluations ☆213 · Updated last month
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆173 · Updated 3 months ago
- Official repository for "Scaling Retrieval-Based Language Models with a Trillion-Token Datastore". ☆205 · Updated 2 weeks ago
- SkyRL-v0: Train Real-World Long-Horizon Agents via Reinforcement Learning ☆422 · Updated this week
- LOFT: A 1 Million+ Token Long-Context Benchmark ☆201 · Updated last week
- Implementation of the Quiet-STaR paper (https://arxiv.org/pdf/2403.09629.pdf) ☆54 · Updated 10 months ago
- Homepage for ProLong (Princeton long-context language models) and paper "How to Train Long-Context Language Models (Effectively)" ☆189 · Updated 3 months ago
- The HELMET Benchmark ☆154 · Updated 2 months ago
- A Collection of Competitive Text-Based Games for Language Model Evaluation and Reinforcement Learning ☆184 · Updated this week
- Benchmark and research code for the paper "SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks" ☆219 · Updated last month
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆173 · Updated 5 months ago
- Official repository for ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ☆158 · Updated last month
- BABILong is a benchmark for LLM evaluation using the needle-in-a-haystack approach; a minimal construction sketch follows this list. ☆203 · Updated last month
- ☆300 · Updated 3 weeks ago
- Code and results accompanying the paper "Refusal in Language Models Is Mediated by a Single Direction". ☆233 · Updated last week
- RewardBench: the first evaluation tool for reward models. ☆604 · Updated last week
- Code and example data for the paper "Rule Based Rewards for Language Model Safety" ☆188 · Updated 11 months ago
- ☆190 · Updated 2 months ago
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆144 · Updated 7 months ago
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆303 · Updated last year
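The needle-in-a-haystack setup behind long-context benchmarks such as BABILong above is straightforward to reproduce in miniature: bury a short "needle" fact at a controlled depth inside long filler text, then ask the model to retrieve it. Here is a minimal sketch under that assumption; `build_haystack` is a hypothetical helper, not BABILong's actual code.

```python
import random

def build_haystack(needle: str, filler_sentences: list[str],
                   total_sentences: int, depth: float) -> str:
    """Bury `needle` at a relative depth (0.0 = start, 1.0 = end) in filler text."""
    body = [random.choice(filler_sentences) for _ in range(total_sentences)]
    body.insert(int(depth * total_sentences), needle)
    return " ".join(body)

filler = ["The sky was clear that morning.", "Traffic moved slowly downtown."]
context = build_haystack("The passcode is 7291.", filler,
                         total_sentences=200, depth=0.5)
prompt = context + "\n\nQuestion: What is the passcode? Answer with only the number."
# Score by checking whether the model's reply contains "7291"; sweep
# `total_sentences` and `depth` to chart retrieval accuracy vs. context length.
```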