sinanw / llm-security-prompt-injection
View external linksLinks

This project investigates the security of large language models by performing binary classification of a set of input prompts to discover malicious prompts. Several approaches have been analyzed using classical ML algorithms, a trained LLM model, and a fine-tuned LLM model.
57Dec 18, 2023Updated 2 years ago

Alternatives and similar repositories for llm-security-prompt-injection

Users that are interested in llm-security-prompt-injection are comparing it to the libraries listed below

Sorting:

Are these results useful?