abacaj / code-eval

Run evaluation on LLMs using the human-eval benchmark
☆ 411 · Updated last year
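For context, evaluation against the HumanEval benchmark is usually driven through OpenAI's `human-eval` package. The sketch below shows that typical flow, not code-eval's exact API; `generate_one_completion` is a placeholder you would replace with your own model call.

```python
# Minimal sketch of a HumanEval run using OpenAI's human-eval package
# (an assumption about the underlying flow, not code-eval's own interface).
from human_eval.data import read_problems, write_jsonl


def generate_one_completion(prompt: str) -> str:
    # Placeholder: replace with a real model call (e.g. a transformers generate() loop).
    return "    return 0\n"


problems = read_problems()  # loads the 164 HumanEval problems

samples = [
    {"task_id": task_id, "completion": generate_one_completion(problems[task_id]["prompt"])}
    for task_id in problems
]
write_jsonl("samples.jsonl", samples)

# Score pass@k with the CLI bundled with human-eval:
#   evaluate_functional_correctness samples.jsonl
```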

Alternatives and similar repositories for code-eval

Users interested in code-eval are comparing it to the libraries listed below.
