StavC / ComPromptMized
ComPromptMized: Unleashing Zero-click Worms that Target GenAI-Powered Applications
☆189Updated 6 months ago
Related projects: ⓘ
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs☆299Updated 7 months ago
- Red-Teaming Language Models with DSPy☆116Updated 5 months ago
- A subset of jailbreaks automatically discovered by the Haize Labs haizing suite.☆77Updated 3 months ago
- Dropbox LLM Security research code and results☆210Updated 3 months ago
- This repository contains various attack against Large Language Models.☆68Updated 3 months ago
- HoneyAgents is a PoC demo of an AI-driven system that combines honeypots with autonomous AI agents to detect and mitigate cyber threats. …☆34Updated 8 months ago
- Lightweight LLM Interaction Framework☆181Updated this week
- A benchmark for prompt injection detection systems.☆80Updated last week
- A repository of Language Model Vulnerabilities and Exposures (LVEs).☆103Updated 6 months ago
- Every practical and proposed defense against prompt injection.☆310Updated 3 months ago
- ☆164Updated 8 months ago
- A guide to LLM hacking: fundamentals, prompt injection, offense, and defense☆112Updated last year
- A curated list of MLSecOps tools, articles and other resources on security applied to Machine Learning and MLOps systems.☆220Updated last month
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [arXiv, Apr 2024]☆181Updated last month
- Make your GenAI Apps Safe & Secure Test & harden your system prompt☆360Updated last month
- Test Software for the Characterization of AI Technologies☆212Updated this week
- 🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed …☆218Updated 7 months ago
- A LLM explicitly designed for getting hacked☆121Updated last year
- Protection against Model Serialization Attacks☆273Updated this week
- Learn about a type of vulnerability that specifically targets machine learning models☆166Updated 3 months ago
- Payloads for Attacking Large Language Models☆56Updated 2 months ago
- ☆158Updated last month
- Tree of Attacks (TAP) Jailbreaking Implementation☆88Updated 7 months ago
- Awesome products for securing AI systems includes open source and commercial options and an infographic licensed CC-BY-SA-4.0.☆45Updated 3 months ago
- A toolkit for detecting and protecting against vulnerabilities in Large Language Models (LLMs).☆116Updated 8 months ago
- Persuasive Jailbreaker: we can persuade LLMs to jailbreak them!☆232Updated 5 months ago
- ☆58Updated 2 months ago
- LLM OSINT is a proof-of-concept method of using LLMs to gather information from the internet and then perform a task with this informatio…☆142Updated last month
- A trace analysis tool for AI agents.☆97Updated this week
- This repository provides implementation to formalize and benchmark Prompt Injection attacks and defenses☆125Updated 2 weeks ago