peluche / deck-of-many-prompts
Manual Prompt Injection / Red Teaming Tool
☆49 · Updated last year
Alternatives and similar repositories for deck-of-many-prompts
Users interested in deck-of-many-prompts are comparing it to the repositories listed below
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs) ☆150 · Updated 11 months ago
- LLM | Security | Operations in one GitHub repo with good links and pictures. ☆69 · Updated this week
- Cybersecurity Intelligent Pentesting Helper for Ethical Researcher (CIPHER). Fine tuned LLM for penetration testing guidance based on wri… ☆34 · Updated 11 months ago
- Prompt Injections Everywhere ☆169 · Updated last year
- https://arxiv.org/abs/2412.02776 ☆66 · Updated last year
- Writeups of challenges and CTFs I participated in ☆84 · Updated 3 months ago
- An LLM explicitly designed for getting hacked ☆163 · Updated 2 years ago
- Tree of Attacks (TAP) Jailbreaking Implementation ☆116 · Updated last year
- Penetration Testing AI Assistant based on open source LLMs. ☆111 · Updated 8 months ago
- The notebook for my talk - ChatGPT: Your Red Teaming Ally ☆50 · Updated 2 years ago
- Payloads for Attacking Large Language Models ☆112 · Updated 6 months ago
- 🤖🛡️🔍🔒🔑 Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions. ☆27 · Updated last year
- A guide to LLM hacking: fundamentals, prompt injection, offense, and defense ☆175 · Updated 2 years ago
- Reference notes for Attacking and Defending Generative AI presentation ☆67 · Updated last year
- NOT for educational purposes: An MCP server for professional penetration testers including STDIO/HTTP/SSE support, nmap, go/dirbuster, ni… ☆104 · Updated 5 months ago
- ☆100 · Updated 2 weeks ago
- A YAML based format for describing tools to LLMs, like man pages but for robots! ☆81 · Updated 7 months ago
- [IJCAI 2024] Imperio is an LLM-powered backdoor attack. It allows the adversary to issue language-guided instructions to control the vict… ☆43 · Updated 9 months ago
- Stage 1: Sensitive Email/Chat Classification for Adversary Agent Emulation (espionage). This project is meant to extend Red Reaper v1 whi… ☆42 · Updated last year
- Short list of indirect prompt injection attacks for OpenAI-based models. ☆36 · Updated 3 months ago
- All things specific to LLM Red Teaming Generative AI ☆29 · Updated last year
- ☆66 · Updated 2 weeks ago
- ☆80 · Updated 3 months ago
- An example vulnerable app that integrates an LLM ☆25 · Updated last year
- A list of curated resources for people interested in AI Red Teaming, Jailbreaking, and Prompt Injection ☆402 · Updated 7 months ago
- ☆29 · Updated 2 years ago
- Repo with random useful scripts, utilities, prompts and stuff ☆189 · Updated 2 weeks ago
- Here Comes the AI Worm: Preventing the Propagation of Adversarial Self-Replicating Prompts Within GenAI Ecosystems ☆220 · Updated 3 months ago
- ☆21 · Updated 11 months ago
- A Completely Modular LLM Reverse Engineering, Red Teaming, and Vulnerability Research Framework. ☆52 · Updated last year