YitingQu / unsafe-diffusion
⭐27 · Updated 6 months ago
Alternatives and similar repositories for unsafe-diffusion:
Users interested in unsafe-diffusion are comparing it to the libraries listed below.
- ⭐24 · Updated this week
- [ICLR 2024 Spotlight 🔥] - [Best Paper Award SoCal NLP 2023 🏆] - Jailbreak in pieces: Compositional Adversarial Attacks on Multi-Modal… · ⭐34 · Updated 7 months ago
- ⭐24 · Updated 7 months ago
- ⭐28 · Updated 7 months ago
- ⭐9 · Updated 4 months ago
- [ICLR 2024] Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images · ⭐25 · Updated 11 months ago
- The official implementation of ECCV'24 paper "To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Uns…" · ⭐64 · Updated 2 months ago
- [CVPR23W] "A Pilot Study of Query-Free Adversarial Attack against Stable Diffusion" by Haomin Zhuang, Yihua Zhang and Sijia Liu · ⭐25 · Updated 4 months ago
- ⭐33 · Updated last month
- ⭐20 · Updated 4 months ago
- ⭐40 · Updated 5 months ago
- List of T2I safety papers, updated daily, welcome to discuss using Discussions · ⭐54 · Updated 5 months ago
- This is an official repository of "VLAttack: Multimodal Adversarial Attacks on Vision-Language Tasks via Pre-trained Models" (NeurIPS 2… · ⭐41 · Updated 2 months ago
- A collection of resources on attacks and defenses targeting text-to-image diffusion models · ⭐53 · Updated last week
- Implementation of BadCLIP https://arxiv.org/pdf/2311.16194.pdf · ⭐18 · Updated 9 months ago
- [ECCV'24 Oral] The official GitHub page for "Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking …" · ⭐15 · Updated 2 months ago
- AnyDoor: Test-Time Backdoor Attacks on Multimodal Large Language Models · ⭐48 · Updated 9 months ago
- [CVPR 2023] Backdoor Defense via Adaptively Splitting Poisoned Dataset · ⭐46 · Updated 9 months ago
- [ECCV 2024] Official PyTorch Implementation of "How Many Unicorns Are in This Image? A Safety Evaluation Benchmark for Vision LLMs" · ⭐76 · Updated last year
- One Prompt Word is Enough to Boost Adversarial Robustness for Pre-trained Vision-Language Models · ⭐42 · Updated 3 weeks ago
- Official implementation of NeurIPS'24 paper "Defensive Unlearning with Adversarial Training for Robust Concept Erasure in Diffusion Model…" · ⭐35 · Updated 2 months ago
- Divide-and-Conquer Attack: Harnessing the Power of LLM to Bypass the Censorship of Text-to-Image Generation Model · ⭐18 · Updated 4 months ago
- Code for NeurIPS 2024 paper "Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models" · ⭐36 · Updated this week
- ⭐10 · Updated 3 months ago
- ⭐23 · Updated last year
- [ECCV-2024] Transferable Targeted Adversarial Attack, CLIP models, Generative adversarial network, Multi-target attacks · ⭐26 · Updated 5 months ago
- A package that achieves 95%+ transfer attack success rate against GPT-4 · ⭐17 · Updated 2 months ago
- Set-level Guidance Attack: Boosting Adversarial Transferability of Vision-Language Pre-training Models. [ICCV 2023 Oral] · ⭐50 · Updated last year
- ⭐26 · Updated last month
- ⭐40 · Updated last year