MLSZHU / LLMSafetyBenchmark

A comprehensive framework for assessing the safety of large language models (LLMs) through multi-dimensional testing.
74 · Updated 3 weeks ago

Alternatives and similar repositories for LLMSafetyBenchmark

Users interested in LLMSafetyBenchmark are comparing it to the libraries listed below.
