peluche / deck-of-many-prompts
Manual Prompt Injection / Red Teaming Tool
☆37 · Updated 11 months ago
Alternatives and similar repositories for deck-of-many-prompts
Users interested in deck-of-many-prompts are comparing it to the repositories listed below.
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs) ☆134 · Updated 9 months ago
- LLM | Security | Operations in one GitHub repo with good links and pictures. ☆55 · Updated 8 months ago
- Tree of Attacks (TAP) Jailbreaking Implementation ☆115 · Updated last year
- Payloads for Attacking Large Language Models ☆99 · Updated 3 months ago
- Prompt Injections Everywhere ☆146 · Updated last year
- 🤖🛡️🔍🔒🔑 Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions. ☆25 · Updated last year
- Cybersecurity Intelligent Pentesting Helper for Ethical Researcher (CIPHER). Fine-tuned LLM for penetration testing guidance based on wri… ☆31 · Updated 8 months ago
- A YAML-based format for describing tools to LLMs, like man pages but for robots! ☆78 · Updated 4 months ago
- https://arxiv.org/abs/2412.02776 ☆62 · Updated 9 months ago
- Here Comes the AI Worm: Preventing the Propagation of Adversarial Self-Replicating Prompts Within GenAI Ecosystems ☆205 · Updated last week
- An LLM explicitly designed for getting hacked ☆160 · Updated 2 years ago
- A list of curated resources for people interested in AI Red Teaming, Jailbreaking, and Prompt Injection ☆315 · Updated 4 months ago
- [IJCAI 2024] Imperio is an LLM-powered backdoor attack. It allows the adversary to issue language-guided instructions to control the vict… ☆41 · Updated 7 months ago
- ☆54 · Updated this week
- A guide to LLM hacking: fundamentals, prompt injection, offense, and defense ☆168 · Updated 2 years ago
- A Productivity-Boosting Burp Suite extension written in Kotlin that enables persistent sticky session handling in web application testing… ☆12 · Updated last week
- ☆19 · Updated 9 months ago
- All things specific to LLM Red Teaming Generative AI ☆28 · Updated 10 months ago
- ☆98 · Updated 4 months ago
- Experimental tools to backdoor large language models by re-writing their system prompts at a raw parameter level. This allows you to pote… ☆185 · Updated 5 months ago
- ☆68 · Updated this week
- Repo with random useful scripts, utilities, prompts and stuff ☆165 · Updated last month
- Reference notes for Attacking and Defending Generative AI presentation ☆65 · Updated last year
- Code Repository for: AIRTBench: Measuring Autonomous AI Red Teaming Capabilities in Language Models ☆77 · Updated this week
- NOT for educational purposes: An MCP server for professional penetration testers including STDIO/HTTP/SSE support, nmap, go/dirbuster, ni… ☆86 · Updated 2 months ago
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities. ☆164 · Updated last year
- This repository contains various attacks against Large Language Models. ☆114 · Updated last year
- Learn about a type of vulnerability that specifically targets machine learning models ☆336 · Updated this week
- An example vulnerable app that integrates an LLM ☆24 · Updated last year
- Stage 1: Sensitive Email/Chat Classification for Adversary Agent Emulation (espionage). This project is meant to extend Red Reaper v1 whi… ☆42 · Updated last year