evaleval / every_eval_ever
Every Eval Ever is a shared schema and crowdsourced eval database. It defines a standardized metadata format for storing AI evaluation results — from leaderboard scrapes and research papers to local evaluation runs — so that results from different frameworks can be compared, reproduced, and reused.
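To picture what a standardized, framework-agnostic eval record could look like, here is a minimal sketch. The field names below are invented for illustration only and are not the project's actual schema; consult the repository for the real metadata format.

```python
# Hypothetical illustration of a standardized eval-result record.
# All field names are assumptions, not the real every_eval_ever schema.
import json

record = {
    "model": "example-model-7b",       # model evaluated (hypothetical field)
    "benchmark": "example-benchmark",  # eval suite or dataset (hypothetical)
    "metric": "accuracy",              # reported metric (hypothetical)
    "score": 0.731,                    # result value (hypothetical)
    "source": "leaderboard-scrape",    # provenance: paper, scrape, or local run
}

# Serializing records to a common JSON shape is what lets results from
# different frameworks be compared, reproduced, and reused.
print(json.dumps(record, indent=2))
```

A shared record shape like this is what makes aggregation across leaderboard scrapes, papers, and local runs possible in the first place.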
49 · Mar 29, 2026 · Updated last week
