utkusen / promptmap
a prompt injection scanner for custom LLM applications
☆740Updated this week
Alternatives and similar repositories for promptmap:
Users that are interested in promptmap are comparing it to the libraries listed below
- Prompt Injection Primer for Engineers☆417Updated last year
- Every practical and proposed defense against prompt injection.☆388Updated 8 months ago
- Uses ChatGPT API, Bard API, and Llama2, Python-Nmap, DNS Recon, PCAP and JWT recon modules and uses the GPT3 model to create vulnerabilit…☆519Updated 3 months ago
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs☆350Updated last year
- A curated list of awesome security tools, experimental case or other interesting things with LLM or GPT.☆574Updated last month
- Learn about a type of vulnerability that specifically targets machine learning models☆220Updated 8 months ago
- Dropbox LLM Security research code and results☆220Updated 9 months ago
- Make your GenAI Apps Safe & Secure Test & harden your system prompt☆435Updated 4 months ago
- A LLM explicitly designed for getting hacked☆136Updated last year
- OWASP Foundation Web Respository☆661Updated this week
- A collection of awesome resources related AI security☆174Updated 2 weeks ago
- some prompt about cyber security☆178Updated last year
- Prompt Injections Everywhere☆103Updated 6 months ago
- Multi-Lingual GenAI Red Teaming Tool☆23Updated 6 months ago
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities.☆158Updated last year
- A guide to LLM hacking: fundamentals, prompt injection, offense, and defense☆140Updated last year
- Code for the website www.jailbreakchat.com☆82Updated last year
- OWASP Top 10 for Agentic AI (AI Agent Security) - Pre-release version☆51Updated this week
- LLMBUS red team tool 🚍☆32Updated last week
- Agentic LLM Vulnerability Scanner / AI red teaming kit 🧪☆1,078Updated this week
- Learn AI security through a series of vulnerable LLM CTF challenges. No sign ups, no cloud fees, run everything locally on your system.☆272Updated 6 months ago
- ☆197Updated last year
- LLM Prompt Injection Detector☆1,184Updated 6 months ago
- New ways of breaking app-integrated LLMs☆1,892Updated last year
- Payloads for Attacking Large Language Models☆74Updated 7 months ago
- A curated list of useful resources that cover Offensive AI.☆1,169Updated last week
- A benchmark for prompt injection detection systems.☆96Updated 2 weeks ago
- A curated list of large language model tools for cybersecurity research.☆430Updated 10 months ago
- Curation of prompts that are known to be adversarial to large language models☆179Updated 2 years ago
- The Security Toolkit for LLM Interactions☆1,422Updated last month