openai / evals
Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.
17,889 stars · Nov 3, 2025 · Updated 3 months ago

Alternatives and similar repositories for evals

Users interested in evals are comparing it to the libraries listed below.
