Divide-and-Conquer Attack: Harnessing the Power of LLM to Bypass the Censorship of Text-to-Image Generation Model
☆18 · Updated Feb 16, 2025
Alternatives and similar repositories for daca
Users interested in daca are comparing it to the repositories listed below.
- ☆197 · Updated Apr 7, 2025
- Official repository for "On the Multi-modal Vulnerability of Diffusion Models" · ☆16 · Updated Jul 15, 2024
- [ICML 2024] Prompting4Debugging: Red-Teaming Text-to-Image Diffusion Models by Finding Problematic Prompts (Official Pytorch Implementati… · ☆52 · Updated Jan 11, 2026
- ☆35 · Updated May 22, 2024
- ☆38 · Updated Jan 15, 2025
- ☆20 · Updated Feb 3, 2025
- This is the official repo of the paper "Latent Guard: a Safety Framework for Text-to-image Generation" · ☆52 · Updated Oct 24, 2024
- ☆13 · Updated Jan 14, 2026
- A collection of resources on attacks and defenses targeting text-to-image diffusion models · ☆92 · Updated Dec 20, 2025
- [CVPR23W] "A Pilot Study of Query-Free Adversarial Attack against Stable Diffusion" by Haomin Zhuang, Yihua Zhang and Sijia Liu · ☆26 · Updated Aug 27, 2024
- PyTorch implementation for the pilot study on the robustness of latent diffusion models · ☆13 · Updated Jun 20, 2023
- ☆14 · Updated Jun 6, 2023
- ☆22 · Updated Dec 14, 2023
- Distribution Preserving Backdoor Attack in Self-supervised Learning · ☆20 · Updated Jan 27, 2024
- [AAAI'25 (Oral)] Jailbreaking Large Vision-language Models via Typographic Visual Prompts · ☆191 · Updated Jun 26, 2025
- [ECCV'24 Oral] The official GitHub page for "Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking … · ☆38 · Updated Oct 17, 2024
- ☆46 · Updated Jul 14, 2024
- [CVPR2024] MMA-Diffusion: MultiModal Attack on Diffusion Models · ☆383 · Updated Jan 8, 2026
- [WWW '25] Model Supply Chain Poisoning: Backdooring Pre-trained Models via Embedding Indistinguishability · ☆18 · Updated May 30, 2025
- Universal Adversarial Perturbations for Vision-Language Pre-trained Models · ☆24 · Updated Aug 8, 2025
- Reinforcement learning code for the SPA-VL dataset · ☆44 · Updated Jun 24, 2024
- Repository for the paper (AAAI 2024, Oral) "Visual Adversarial Examples Jailbreak Large Language Models" · ☆266 · Updated May 13, 2024
- The official implementation of the ECCV'24 paper "To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Uns… · ☆87 · Updated Feb 28, 2025
- ☆18 · Updated Sep 25, 2019
- The official code of the IEEE S&P 2024 paper "Why Does Little Robustness Help? A Further Step Towards Understanding Adversarial Transferabili… · ☆20 · Updated Aug 22, 2024
- This repository is the official implementation of the paper "ASSET: Robust Backdoor Data Detection Across a Multiplicity of Deep Learning… · ☆19 · Updated Jun 7, 2023
- ☆53 · Updated May 24, 2023
- ☆55 · Updated Dec 7, 2024
- ☆23 · Updated Feb 5, 2026
- ☆59 · Updated Jun 5, 2024
- A package that achieves a 95%+ transfer attack success rate against GPT-4 · ☆26 · Updated Oct 24, 2024
- An Embarrassingly Simple Backdoor Attack on Self-supervised Learning · ☆20 · Updated Jan 24, 2024
- ☆22 · Updated Nov 19, 2021
- Code for the ICLR 2025 paper "Failures to Find Transferable Image Jailbreaks Between Vision-Language Models" · ☆37 · Updated Jun 1, 2025
- Official implementation of the paper "Stable Diffusion is Unstable" · ☆23 · Updated May 21, 2024
- ☆30 · Updated Sep 3, 2024
- Accepted by ECCV 2024 · ☆188 · Updated Oct 15, 2024
- Official PyTorch implementation of "Towards Adversarial Attack on Vision-Language Pre-training Models" · ☆65 · Updated Mar 20, 2023
- ☆28 · Updated May 28, 2023