nukIeer / AI-Prompt-Injection-Cheatsheet
AI hacking snippets for prompt injection, jailbreaking LLMs, and bypassing AI filters, aimed at ethical hackers and security researchers testing AI security vulnerabilities. A single README.md with practical AI prompt-engineering tips.
45 · Nov 10, 2025 · Updated 4 months ago

Alternatives and similar repositories for AI-Prompt-Injection-Cheatsheet

Users interested in AI-Prompt-Injection-Cheatsheet are comparing it to the libraries listed below.
