verazuo / prompt-stealing-attack
[USENIX'24] Prompt Stealing Attacks Against Text-to-Image Generation Models
☆49 · Updated last year
Alternatives and similar repositories for prompt-stealing-attack
Users interested in prompt-stealing-attack are comparing it to the repositories listed below.
- This is the code repository of our submission: Understanding the Dark Side of LLMs’ Intrinsic Self-Correction. ☆63 · Updated last year
- ☆56 · Updated last year
- ☆37 · Updated last year
- ☆37 · Updated last year
- Code for ACM MM 2024 paper: White-box Multimodal Jailbreaks Against Large Vision-Language Models. ☆31 · Updated last year
- Code for "When LLM Meets DRL: Advancing Jailbreaking Efficiency via DRL-guided Search" (NeurIPS 2024). ☆17 · Updated last year
- A toolbox for backdoor attacks. ☆23 · Updated 3 years ago
- ☆58 · Updated last year
- ☆18 · Updated 3 years ago
- ☆83 · Updated 4 years ago
- MASTERKEY is a framework designed to explore and exploit vulnerabilities in large language model chatbots by automating jailbreak attacks… ☆29 · Updated last year
- Distribution Preserving Backdoor Attack in Self-supervised Learning. ☆20 · Updated 2 years ago
- A curated list of trustworthy Generative AI papers, updated daily. ☆75 · Updated last year
- ☆109 · Updated last year
- Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks (IEEE S&P 2024). ☆34 · Updated 7 months ago
- [NDSS'25] The official implementation of safety misalignment. ☆17 · Updated last year
- ☆38 · Updated 8 months ago
- [AAAI'25 (Oral)] Jailbreaking Large Vision-language Models via Typographic Visual Prompts. ☆190 · Updated 7 months ago
- [ECCV'24 Oral] The official GitHub page for "Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking… ☆34 · Updated last year
- This is an official repository for Practical Membership Inference Attacks Against Large-Scale Multi-Modal Models: A Pilot Study (ICCV 2023… ☆24 · Updated 2 years ago
- Official codebase for Image Hijacks: Adversarial Images can Control Generative Models at Runtime. ☆54 · Updated 2 years ago
- Code for the NeurIPS 2024 paper "Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models". ☆58 · Updated last year
- [AAAI'21] Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification. ☆29 · Updated last year
- This is the source code for MEA-Defender. Our paper was accepted by the IEEE Symposium on Security and Privacy (S&P) 2024. ☆29 · Updated 2 years ago
- Reconstructive Neuron Pruning for Backdoor Defense (ICML 2023). ☆39 · Updated 2 years ago
- Accepted by CVPR 2025 (highlight). ☆22 · Updated 7 months ago
- ☆33 · Updated 2 months ago
- [ICLR 2024] Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images. ☆42 · Updated 2 years ago
- Code for paper: "PromptCARE: Prompt Copyright Protection by Watermark Injection and Verification", IEEE S&P 2024. ☆34 · Updated last year
- ☆71 · Updated 10 months ago