MLSZHU / LLMSafetyBenchmark
A comprehensive framework for assessing the security capabilities of large language models (LLMs) through multi-dimensional testing.
77 · May 15, 2025 · Updated 10 months ago

Alternatives and similar repositories for LLMSafetyBenchmark

Users that are interested in LLMSafetyBenchmark are comparing it to the libraries listed below.
