StavC / ComPromptMized
ComPromptMized: Unleashing Zero-click Worms that Target GenAI-Powered Applications
☆193 · Updated 8 months ago
Related projects
Alternatives and complementary repositories for ComPromptMized
- Lightweight LLM Interaction Framework ☆210 · Updated this week
- HoneyAgents is a PoC demo of an AI-driven system that combines honeypots with autonomous AI agents to detect and mitigate cyber threats. … ☆39 · Updated 10 months ago
- Project Mantis: Hacking Back the AI-Hacker; Prompt Injection as a Defense Against LLM-driven Cyberattacks ☆50 · Updated this week
- Red-Teaming Language Models with DSPy ☆142 · Updated 7 months ago
- Make your GenAI apps safe & secure: test & harden your system prompt ☆405 · Updated last month
- Dropbox LLM Security research code and results ☆217 · Updated 6 months ago
- Awesome products for securing AI systems; includes open source and commercial options and an infographic licensed CC-BY-SA-4.0. ☆48 · Updated 5 months ago
- Tree of Attacks (TAP) Jailbreaking Implementation ☆95 · Updated 9 months ago
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs ☆316 · Updated 9 months ago
- An LLM explicitly designed for getting hacked ☆131 · Updated last year
- 🧠 LLMFuzzer 🧠 Fuzzing Framework for Large Language Models; LLMFuzzer is the first open-source fuzzing framework specifically designed … ☆234 · Updated 9 months ago
- The project serves as a strategic advisory tool, capitalizing on the ZySec series of AI models to amplify the capabilities of security pr… ☆40 · Updated 6 months ago
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities. ☆149 · Updated last year
- Payloads for Attacking Large Language Models ☆64 · Updated 4 months ago
- A Completely Modular LLM Reverse Engineering, Red Teaming, and Vulnerability Research Framework. ☆26 · Updated 2 weeks ago
- Protection against Model Serialization Attacks ☆320 · Updated this week
- Codebase of https://arxiv.org/abs/2410.14923 ☆30 · Updated last month
- Prompt Injections Everywhere ☆87 · Updated 3 months ago
- Stage 1: Sensitive Email/Chat Classification for Adversary Agent Emulation (espionage). This project is meant to extend Red Reaper v1 whi… ☆23 · Updated 3 months ago
- A subset of jailbreaks automatically discovered by the Haize Labs haizing suite. ☆86 · Updated 5 months ago
- Test Software for the Characterization of AI Technologies ☆227 · Updated this week
- Every practical and proposed defense against prompt injection. ☆347 · Updated 5 months ago
- A guide to LLM hacking: fundamentals, prompt injection, offense, and defense ☆121 · Updated last year
- Automated vulnerability discovery and annotation ☆63 · Updated 3 months ago
- XBOW Validation Benchmarks ☆53 · Updated 2 months ago