evaleval / every_eval_ever
Every Eval Ever is a shared schema and crowdsourced eval database. It defines a standardized metadata format for storing AI evaluation results — from leaderboard scrapes and research papers to local evaluation runs — so that results from different frameworks can be compared, reproduced, and reused.
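To make the idea of a shared schema concrete, here is a minimal sketch of what one standardized eval-result record might look like. All field names and values below are illustrative assumptions, not the project's actual schema; consult the repository for the real specification.

```python
# Hypothetical record in an Every Eval Ever-style format. Field names and
# values are illustrative assumptions, not the project's actual schema.
from dataclasses import dataclass, asdict
import json


@dataclass
class EvalResult:
    model: str       # model identifier, e.g. a Hugging Face repo id
    benchmark: str   # name of the evaluation benchmark
    metric: str      # metric reported, e.g. "accuracy"
    score: float     # metric value
    source: str      # provenance: "leaderboard", "paper", or "local_run"
    framework: str   # eval harness that produced the result


# Example: serializing one record as JSON so results produced by
# different frameworks can be stored and compared in one database.
result = EvalResult(
    model="example-org/example-model-7b",
    benchmark="example-benchmark",
    metric="accuracy",
    score=0.72,
    source="local_run",
    framework="example-harness",
)
print(json.dumps(asdict(result), indent=2))
```

Keeping provenance (`source`) and the producing `framework` alongside the score is what lets results from leaderboard scrapes, papers, and local runs sit in the same database without losing track of how comparable they are.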
43 stars · Updated Mar 21, 2026
