VITA-Group / Shake-to-Leak
☆14, updated last month
Alternatives and similar repositories for Shake-to-Leak:
- Official implementation of NeurIPS'24 paper "Defensive Unlearning with Adversarial Training for Robust Concept Erasure in Diffusion Model…" (☆39, updated 6 months ago)
- ☆20, updated last year
- ☆29, updated 3 months ago
- [ICLR 2022 official code] Robust Learning Meets Generative Models: Can Proxy Distributions Improve Adversarial Robustness? (☆29, updated 3 years ago)
- ☆62, updated 7 months ago
- Code for NeurIPS 2024 paper "Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models" (☆46, updated 3 months ago)
- ☆32, updated 9 months ago
- ☆41, updated last year
- The official implementation of ECCV'24 paper "To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Uns…" (☆73, updated 2 months ago)
- Certified robustness "for free" using off-the-shelf diffusion models and classifiers (☆40, updated last year)
- One Prompt Word is Enough to Boost Adversarial Robustness for Pre-trained Vision-Language Models (☆47, updated 4 months ago)
- ☆59, updated 2 years ago
- ☆53, updated last year
- [ECCV 2024] Adversarial Prompt Tuning for Vision-Language Models (☆25, updated 5 months ago)
- [CVPR'24 Oral] Metacloak: Preventing Unauthorized Subject-driven Text-to-image Diffusion-based Synthesis via Meta-learning (☆23, updated 5 months ago)
- [CVPR 2024] The official implementation of our paper "Revisiting Adversarial Training at Scale" (☆19, updated last year)
- Official PyTorch implementation of "CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning" @ ICCV 2023 (☆35, updated last year)
- [ECCV'24 Oral] The official GitHub page for "Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking …" (☆28, updated 6 months ago)
- ☆13, updated 2 years ago
- Official repo for "An Efficient Membership Inference Attack for the Diffusion Model by Proximal Initialization" (☆13, updated last year)
- ☆18, updated last year
- [ICML 2024] Prompting4Debugging: Red-Teaming Text-to-Image Diffusion Models by Finding Problematic Prompts (official PyTorch implementati…) (☆42, updated 5 months ago)
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight) (☆25, updated 5 months ago)
- [NeurIPS 2024 D&B Track] UnlearnCanvas: A Stylized Image Dataset to Benchmark Machine Unlearning for Diffusion Models, by Yihua Zhang, Cho… (☆68, updated 6 months ago)
- AnyDoor: Test-Time Backdoor Attacks on Multimodal Large Language Models (☆53, updated last year)
- [ICML 2023] Are Diffusion Models Vulnerable to Membership Inference Attacks? (☆34, updated 8 months ago)
- ☆42, updated 5 months ago
- [CVPR23W] "A Pilot Study of Query-Free Adversarial Attack against Stable Diffusion" by Haomin Zhuang, Yihua Zhang and Sijia Liu (☆26, updated 8 months ago)
- ☆27, updated 2 weeks ago
- [ECCV 2024] Official PyTorch implementation of "How Many Unicorns Are in This Image? A Safety Evaluation Benchmark for Vision LLMs" (☆80, updated last year)