utkusen / promptmap
automatically tests prompt injection attacks on ChatGPT instances
☆612Updated 9 months ago
Related projects: ⓘ
- Dropbox LLM Security research code and results☆210Updated 3 months ago
- Prompt Injection Primer for Engineers☆348Updated last year
- Uses ChatGPT API, Bard API, and Llama2, Python-Nmap, DNS Recon, PCAP and JWT recon modules and uses the GPT3 model to create vulnerabilit…☆469Updated 2 months ago
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs☆299Updated 7 months ago
- Every practical and proposed defense against prompt injection.☆310Updated 3 months ago
- LLM vulnerability scanner☆1,273Updated this week
- A curated list of awesome security tools, experimental case or other interesting things with LLM or GPT.☆538Updated 2 months ago
- Learn about a type of vulnerability that specifically targets machine learning models☆166Updated 3 months ago
- OWASP Foundation Web Respository☆504Updated last week
- LLM Prompt Injection Detector☆1,067Updated last month
- A guide to LLM hacking: fundamentals, prompt injection, offense, and defense☆112Updated last year
- Agentic LLM Vulnerability Scanner / AI red teaming kit☆684Updated last week
- 🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed …☆218Updated 7 months ago
- A curated list of large language model tools for cybersecurity research.☆376Updated 5 months ago
- Make your GenAI Apps Safe & Secure Test & harden your system prompt☆360Updated 3 weeks ago
- ☆164Updated 8 months ago
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a…☆293Updated 6 months ago
- Learn AI security through a series of vulnerable LLM CTF challenges. No sign ups, no cloud fees, run everything locally on your system.☆147Updated 3 weeks ago
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities.☆143Updated 11 months ago
- some prompt about cyber security☆137Updated last year
- A LLM explicitly designed for getting hacked☆121Updated last year
- ☆357Updated last month
- Helping Ethical Hackers use LLMs in 50 Lines of Code or less..☆392Updated 2 weeks ago
- Protection against Model Serialization Attacks☆273Updated this week
- A collection of awesome resources related AI security☆107Updated 5 months ago
- AttackGen is a cybersecurity incident response testing tool that leverages the power of large language models and the comprehensive MITRE…☆906Updated last month
- A curated list of MLSecOps tools, articles and other resources on security applied to Machine Learning and MLOps systems.☆220Updated last month
- New ways of breaking app-integrated LLMs☆1,799Updated last year
- Prompt Injections Everywhere☆68Updated last month
- Official repo for GPTFUZZER : Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts☆366Updated 5 months ago