MLSZHU / LLMSafetyBenchmarkView on GitHub
A comprehensive framework for assessing the security capabilities of large language models (LLMs) through multi-dimensional testing.
77May 15, 2025Updated 9 months ago

Alternatives and similar repositories for LLMSafetyBenchmark

Users that are interested in LLMSafetyBenchmark are comparing it to the libraries listed below

Sorting:

Are these results useful?