nukIeer / AI-Prompt-Injection-Cheatsheet

AI hacking snippets for prompt injection, jailbreaking LLMs, and bypassing AI filters. Intended for ethical hackers and security researchers testing AI security vulnerabilities. A single README.md with practical AI prompt-engineering tips. Keywords: AI hacking, prompt injection, LLM jailbreaking, AI security, ethical hacking.
32 stars · Nov 10, 2025 · Updated 3 months ago
