peluche / deck-of-many-prompts
Manual Prompt Injection / Red Teaming Tool
☆24 · Updated 5 months ago
Alternatives and similar repositories for deck-of-many-prompts:
Users interested in deck-of-many-prompts are comparing it to the repositories listed below:
- A steganography tool for automatically encoding images that act as prompt injections/jailbreaks for AIs with code interpreter and vision (see the first sketch after this list). ☆67 · Updated 5 months ago
- OllaDeck is a purple technology stack for Generative AI (text modality) cybersecurity. It provides a comprehensive set of tools for both … ☆15 · Updated 6 months ago
- A utility to inspect, validate, sign and verify machine learning model files. ☆54 · Updated last month
- A YAML based format for describing tools to LLMs, like man pages but for robots! ☆66 · Updated last month
- https://arxiv.org/abs/2412.02776 ☆49 · Updated 3 months ago
- HoneyAgents is a PoC demo of an AI-driven system that combines honeypots with autonomous AI agents to detect and mitigate cyber threats. … ☆42 · Updated last year
- Python library for extracting entities, relationships, and schemas from documents ☆37 · Updated 3 months ago
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs) ☆91 · Updated 3 months ago
- MCP server for querying the Shodan API ☆14 · Updated 3 weeks ago
- A collection of prompt injection mitigation techniques. ☆20 · Updated last year
- A trial-and-error approach to temperature optimization for LLMs. Runs the same prompt at many temperatures and selects the best output aut… (see the second sketch after this list) ☆49 · Updated last year
- CLI and API server for https://github.com/dreadnode/robopages ☆32 · Updated last week
- Red-Teaming Language Models with DSPy ☆175 · Updated last month
- This is a repository to experiment with MCP for security ☆15 · Updated 2 months ago
- 🤖 A GitHub action that leverages fabric patterns through an agent-based approach ☆20 · Updated 2 months ago
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ☆108 · Updated last year
- A curated list of my GitHub stars! ☆18 · Updated 2 weeks ago
- Payloads for Attacking Large Language Models ☆77 · Updated 8 months ago
- 🤖🛡️🔍🔒🔑 Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions. ☆23 · Updated 10 months ago
- A tool for testing the efficacy of prompts and prompt + model combinations. ☆65 · Updated 7 months ago
- Stage 1: Sensitive Email/Chat Classification for Adversary Agent Emulation (espionage). This project is meant to extend Red Reaper v1 whi… ☆39 · Updated 7 months ago
- Cybersecurity Intelligent Pentesting Helper for Ethical Researcher (CIPHER). Fine-tuned LLM for penetration testing guidance based on wri… ☆19 · Updated 3 months ago
- A Completely Modular LLM Reverse Engineering, Red Teaming, and Vulnerability Research Framework. ☆46 · Updated 4 months ago
- Codebase of https://arxiv.org/abs/2410.14923 ☆44 · Updated 5 months ago
- LLM OSINT is a proof-of-concept method of using LLMs to gather information from the internet and then perform a task with this informatio… ☆183 · Updated 4 months ago
- De-redacting Elon's Email with Character-count Constrained Llama2 Decoding ☆10 · Updated last year
- Prompt leak technique for Bing Chat ☆31 · Updated last year
- A subset of jailbreaks automatically discovered by the Haize Labs haizing suite. ☆89 · Updated 9 months ago
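For the image-steganography entry above, here is a minimal sketch of the general idea: hiding a text payload (for example a prompt injection) in the least-significant bits of an image's pixels. This is a generic LSB approach and not necessarily the method that repository uses; `embed_text` and `extract_text` are hypothetical helper names, and Pillow is assumed to be installed.

```python
from PIL import Image  # assumes Pillow is installed

def embed_text(image_path: str, payload: str, out_path: str) -> None:
    """Hide a UTF-8 payload in the least-significant bits of an RGB image (hypothetical helper)."""
    img = Image.open(image_path).convert("RGB")
    data = payload.encode("utf-8") + b"\x00"                  # NUL byte marks the end of the payload
    bits = [(byte >> i) & 1 for byte in data for i in range(7, -1, -1)]
    flat = [channel for pixel in img.getdata() for channel in pixel]
    if len(bits) > len(flat):
        raise ValueError("payload too large for this image")
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & ~1) | bit                        # overwrite the least-significant bit
    img.putdata([tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)])
    img.save(out_path, "PNG")                                 # lossless format so the LSBs survive

def extract_text(image_path: str) -> str:
    """Recover a payload embedded by embed_text (hypothetical helper)."""
    flat = [channel for pixel in Image.open(image_path).convert("RGB").getdata() for channel in pixel]
    out = bytearray()
    for i in range(0, len(flat) - 7, 8):
        byte = 0
        for bit in flat[i:i + 8]:
            byte = (byte << 1) | (bit & 1)
        if byte == 0:                                         # stop at the NUL terminator
            break
        out.append(byte)
    return out.decode("utf-8", errors="replace")
```

In the scenario that entry describes, a model with vision and a code interpreter could be asked to inspect such an image and end up following the hidden instructions.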
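The temperature-optimization entry describes a simple trial-and-error loop: run the same prompt at several temperatures and keep the best-scoring output. Below is a minimal sketch of that idea, not the listed repository's implementation; `generate` and `score` are hypothetical placeholders for a real LLM call and a real quality metric.

```python
import random

def generate(prompt: str, temperature: float) -> str:
    # Hypothetical stand-in for a real LLM API call.
    random.seed(hash((prompt, temperature)))
    return f"sampled completion at T={temperature} ({random.random():.3f})"

def score(output: str) -> float:
    # Hypothetical quality metric, e.g. a heuristic or an LLM judge.
    return -abs(len(output) - 40)

def best_temperature(prompt: str, temperatures: list[float]) -> tuple[float, str]:
    """Run the same prompt at many temperatures and return the best-scoring (temperature, output) pair."""
    candidates = [(t, generate(prompt, t)) for t in temperatures]
    return max(candidates, key=lambda pair: score(pair[1]))

if __name__ == "__main__":
    t, out = best_temperature("Summarize the report in one sentence.", [0.0, 0.3, 0.7, 1.0, 1.3])
    print(f"best temperature: {t}\n{out}")
```

The selection step is just an argmax over candidates, so the approach costs one generation per temperature tried plus one scoring pass.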