openai / evals

Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.
15,049 stars · Updated last month
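
As a rough illustration of the data-driven style of evaluation the description refers to, the sketch below writes a tiny JSONL dataset of prompt/ideal-answer pairs in the general shape used by basic match-style evals. The file name and the exact sample fields here are illustrative assumptions, not details taken from this listing.

```python
import json

# Hypothetical prompt/ideal-answer pairs in the general shape used by
# match-style evals: each sample pairs a chat-style "input" with an
# "ideal" reference answer the model's completion is checked against.
samples = [
    {
        "input": [
            {"role": "system", "content": "Answer with a single word."},
            {"role": "user", "content": "What is the capital of France?"},
        ],
        "ideal": "Paris",
    },
    {
        "input": [
            {"role": "system", "content": "Answer with a single word."},
            {"role": "user", "content": "What is 2 + 2?"},
        ],
        "ideal": "4",
    },
]

# Eval datasets are stored as JSONL: one JSON object per line.
with open("my_eval_samples.jsonl", "w") as f:
    for sample in samples:
        f.write(json.dumps(sample) + "\n")
```

A dataset like this is typically registered in the repo's YAML registry and then run with its `oaieval` command-line tool; see the openai/evals documentation for the authoritative schema and workflow.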

Related projects

Alternative and complementary repositories for evals