jiangnanboy / llm_security

Checks the inputs and outputs of generative large language models with a classification method and a sensitive-word detection method, identifying risky content as early as possible.
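A minimal sketch of the two-stage idea described above, assuming Python; the names `SENSITIVE_WORDS`, `classify_risk`, and `is_safe` are hypothetical, and the classifier is a stub rather than the repository's actual model or API.

```python
# Hypothetical two-stage safety check: a sensitive-word pass runs first,
# then a risk classifier scores the remaining content.
from typing import List

# Hypothetical word list; in practice this would be loaded from a lexicon file.
SENSITIVE_WORDS: List[str] = ["example_banned_term", "another_banned_term"]


def contains_sensitive_word(text: str) -> bool:
    """Return True if any sensitive word appears in the text."""
    lowered = text.lower()
    return any(word in lowered for word in SENSITIVE_WORDS)


def classify_risk(text: str) -> float:
    """Placeholder for a trained risk classifier returning a score in [0, 1].

    The real project presumably uses a learned text-classification model here;
    this stub simply returns a fixed low score.
    """
    return 0.0


def is_safe(text: str, threshold: float = 0.5) -> bool:
    """Check model input or output: reject on sensitive words or a high risk score."""
    if contains_sensitive_word(text):
        return False
    return classify_risk(text) < threshold


if __name__ == "__main__":
    print(is_safe("a harmless prompt"))              # True
    print(is_safe("text with example_banned_term"))  # False
```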

Alternatives and similar repositories for llm_security

Users interested in llm_security are comparing it to the libraries listed below.
