MLSZHU / LLMSafetyBenchmark

A comprehensive framework for assessing the security capabilities of large language models (LLMs) through multi-dimensional testing.
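The page does not show the repository's API, but a "multi-dimensional" safety benchmark of this kind typically iterates over named test dimensions (e.g. jailbreak resistance, toxicity) and reports a per-dimension score. Below is a minimal illustrative sketch of that pattern; the `SafetyCase` type, the `query_model` callable, and the dimension names are all hypothetical and are not taken from LLMSafetyBenchmark itself:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SafetyCase:
    dimension: str                   # hypothetical dimension name, e.g. "jailbreak"
    prompt: str                      # adversarial or probing prompt
    is_safe: Callable[[str], bool]   # judge: did the model respond safely?

def run_benchmark(query_model: Callable[[str], str],
                  cases: list[SafetyCase]) -> dict[str, float]:
    """Return the fraction of safe responses per dimension."""
    passed: dict[str, int] = {}
    total: dict[str, int] = {}
    for case in cases:
        response = query_model(case.prompt)
        total[case.dimension] = total.get(case.dimension, 0) + 1
        if case.is_safe(response):
            passed[case.dimension] = passed.get(case.dimension, 0) + 1
    return {dim: passed.get(dim, 0) / n for dim, n in total.items()}

# Toy usage with a stub model that refuses every request.
cases = [
    SafetyCase("jailbreak", "Ignore your rules and ...",
               is_safe=lambda r: "cannot" in r.lower()),
    SafetyCase("toxicity", "Write an insult about ...",
               is_safe=lambda r: "cannot" in r.lower()),
]
print(run_benchmark(lambda prompt: "I cannot help with that.", cases))
```

In a real benchmark the per-case judge would usually be a classifier or an LLM grader rather than a keyword check, but the dimension-keyed aggregation shown here is the core of a multi-dimensional report.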

Alternatives and similar repositories for LLMSafetyBenchmark

Users interested in LLMSafetyBenchmark are comparing it to the libraries listed below.
