verazuo / prompt-stealing-attack
[USENIX'24] Prompt Stealing Attacks Against Text-to-Image Generation Models
☆22 · Updated last month
Related projects
Alternatives and complementary repositories for prompt-stealing-attack
- The official implementation of our pre-print paper "Automatic and Universal Prompt Injection Attacks against Large Language Models". ☆32 · Updated 2 weeks ago
- ☆43 · Updated last year
- Code & Data for the paper "Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents" [NeurIPS 2024] ☆44 · Updated last month
- Improved techniques for optimization-based jailbreaking on large language models ☆42 · Updated 5 months ago
- ☆20 · Updated 9 months ago
- ☆18 · Updated 11 months ago
- PAL: Proxy-Guided Black-Box Attack on Large Language Models ☆46 · Updated 2 months ago
- Code for the paper "BadPrompt: Backdoor Attacks on Continuous Prompts" ☆35 · Updated 4 months ago
- Repo for the research paper "Aligning LLMs to Be Robust Against Prompt Injection" ☆18 · Updated 2 weeks ago
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents. ☆59 · Updated this week
- A list of recent papers about adversarial learning ☆71 · Updated last week
- [NeurIPS 2024] Official implementation for "AgentPoison: Red-teaming LLM Agents via Memory or Knowledge Base Backdoor Poisoning" ☆57 · Updated 3 months ago
- A curated list of trustworthy Generative AI papers. Daily updating... ☆67 · Updated 2 months ago
- Code for Voice Jailbreak Attacks Against GPT-4o. ☆25 · Updated 5 months ago
- Package to optimize Adversarial Attacks against (Large) Language Models with Varied Objectives ☆63 · Updated 8 months ago
- Code repo of our paper "Towards Understanding Jailbreak Attacks in LLMs: A Representation Space Analysis" (https://arxiv.org/abs/2406.10794…) ☆11 · Updated 3 months ago
- ☆22 · Updated last year
- ☆86 · Updated 8 months ago
- ☆53 · Updated last year
- BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks on Large Language Models ☆72 · Updated 2 months ago
- ☆13 · Updated 2 years ago
- Jailbreaking Large Vision-language Models via Typographic Visual Prompts ☆85 · Updated 6 months ago
- ☆14 · Updated 6 months ago
- JailBreakV-28K: A comprehensive benchmark designed to evaluate the transferability of LLM jailbreak attacks to MLLMs, and further assess … ☆35 · Updated 4 months ago
- ☆20 · Updated 2 months ago
- ☆76 · Updated 3 years ago
- LLM Self Defense: By Self Examination, LLMs know they are being tricked ☆26 · Updated 5 months ago
- ☆31 · Updated last year
- Code for the NeurIPS 2024 paper "Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models" ☆28 · Updated last month
- ☆35 · Updated 9 months ago