WildEval / ZeroEval
A simple unified framework for evaluating LLMs
☆204 · Updated 2 weeks ago
Alternatives and similar repositories for ZeroEval:
Users interested in ZeroEval are comparing it to the libraries listed below.
- Benchmarking LLMs with Challenging Tasks from Real Users · ☆218 · Updated 4 months ago
- 🌾 OAT: A research-friendly framework for LLM online alignment, including preference learning, reinforcement learning, etc. · ☆224 · Updated 2 weeks ago
- ☆156 · Updated 2 weeks ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. · ☆164 · Updated 2 weeks ago
- The official evaluation suite and dynamic data release for MixEval. · ☆233 · Updated 4 months ago
- Open-source code for the paper "Retrieval Head Mechanistically Explains Long-Context Factuality". · ☆179 · Updated 7 months ago
- ☆111 · Updated last month
- RewardBench: the first evaluation tool for reward models. · ☆526 · Updated 3 weeks ago
- Reproducible, flexible LLM evaluations. · ☆176 · Updated 3 months ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. · ☆168 · Updated 2 months ago
- LOFT: A 1 Million+ Token Long-Context Benchmark · ☆180 · Updated last week
- Official repository for "Scaling Retrieval-Based Language Models with a Trillion-Token Datastore". · ☆196 · Updated this week
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) · ☆206 · Updated 10 months ago
- LongEmbed: Extending Embedding Models for Long Context Retrieval (EMNLP 2024) · ☆131 · Updated 4 months ago
- Evaluating LLMs with fewer examples · ☆147 · Updated 11 months ago
- ☆307 · Updated 9 months ago
- Code and results accompanying the paper "Refusal in Language Models Is Mediated by a Single Direction". · ☆192 · Updated 5 months ago
- BABILong: a benchmark for LLM evaluation using the needle-in-a-haystack approach. · ☆193 · Updated this week
- Code for the NeurIPS 2024 paper "Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization". · ☆186 · Updated 3 months ago
- The official repo for "LLoCo: Learning Long Contexts Offline". · ☆116 · Updated 9 months ago
- Code and data for "Long-context LLMs Struggle with Long In-context Learning" (TMLR 2025). · ☆102 · Updated last month
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)". · ☆166 · Updated 2 weeks ago
- Implementation of the Quiet-STaR paper (https://arxiv.org/pdf/2403.09629.pdf) · ☆53 · Updated 7 months ago
- [NeurIPS 2024 Spotlight] Observational Scaling Laws · ☆53 · Updated 5 months ago
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens" (https://arxiv.org/abs/2402.13718). · ☆313 · Updated 5 months ago
- Code and example data for the paper "Rule Based Rewards for Language Model Safety". · ☆183 · Updated 8 months ago
- Self-playing Adversarial Language Game Enhances LLM Reasoning (NeurIPS 2024) · ☆122 · Updated last month
- ☆312 · Updated 6 months ago
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples". · ☆298 · Updated last year