WildEval / ZeroEval
A simple unified framework for evaluating LLMs
☆215 · Updated last month
Alternatives and similar repositories for ZeroEval
Users interested in ZeroEval are comparing it to the libraries listed below.
- Benchmarking LLMs with Challenging Tasks from Real Users ☆223 · Updated 7 months ago
- The HELMET Benchmark ☆149 · Updated last month
- LOFT: A 1 Million+ Token Long-Context Benchmark ☆198 · Updated last month
- Code for NeurIPS'24 paper 'Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization' ☆193 · Updated 6 months ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file (a hedged sketch of the idea follows this list) ☆173 · Updated 2 months ago
- ☆174 · Updated last month
- Reproducible, flexible LLM evaluations ☆204 · Updated 3 weeks ago
- ☆114 · Updated 3 months ago
- 🌾 OAT: A research-friendly framework for LLM online alignment, including reinforcement learning, preference learning, etc. ☆367 · Updated this week
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" ☆151 · Updated last month
- Evaluating LLMs with fewer examples ☆155 · Updated last year
- A Collection of Competitive Text-Based Games for Language Model Evaluation and Reinforcement Learning
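The Archon entry above describes declaring a pipeline of inference-time techniques in a JSON config rather than in code. Below is a minimal, hypothetical sketch of what such a config-driven pipeline could look like; the schema and field names (`layers`, `generator`, `ranker`, `fuser`) are illustrative assumptions, not Archon's actual format.

```python
import json

# Hypothetical config illustrating a config-driven inference-time pipeline:
# generate candidates with several models, rank them, then fuse the top ones.
# All field names here are assumptions for illustration, not Archon's schema.
config_text = """
{
  "name": "ensemble-then-fuse",
  "layers": [
    {"type": "generator", "models": ["model-a", "model-b"], "samples": 5},
    {"type": "ranker", "model": "judge-model", "top_k": 3},
    {"type": "fuser", "model": "model-a"}
  ]
}
"""

def describe(config: dict) -> None:
    """Print the declared pipeline so its layer structure is easy to inspect."""
    print(f"pipeline: {config['name']}")
    for i, layer in enumerate(config["layers"], start=1):
        print(f"  layer {i}: {layer['type']} -> {layer}")

if __name__ == "__main__":
    describe(json.loads(config_text))
```

The appeal of this style is that swapping models or reordering techniques is a data edit, not a code change, which makes pipelines easy to search over and compare.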