VITA-Group / Shake-to-Leak
☆14 · Updated 2 months ago
Alternatives and similar repositories for Shake-to-Leak
Users interested in Shake-to-Leak are comparing it to the repositories listed below.
- ☆20 · Updated last year
- Official implementation of NeurIPS'24 paper "Defensive Unlearning with Adversarial Training for Robust Concept Erasure in Diffusion Model… ☆41 · Updated 7 months ago
- ☆31 · Updated 4 months ago
- Certified robustness "for free" using off-the-shelf diffusion models and classifiers ☆41 · Updated 2 years ago
- AnyDoor: Test-Time Backdoor Attacks on Multimodal Large Language Models ☆54 · Updated last year
- [CVPR'24 Oral] MetaCloak: Preventing Unauthorized Subject-driven Text-to-image Diffusion-based Synthesis via Meta-learning ☆25 · Updated 6 months ago
- ☆53 · Updated 2 years ago
- Code for the paper "Autoregressive Perturbations for Data Poisoning" (NeurIPS 2022) ☆20 · Updated 8 months ago
- Code for the NeurIPS 2024 paper "Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models" ☆49 · Updated 4 months ago
- PDM-based Purifier ☆20 · Updated 7 months ago
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight) ☆26 · Updated 6 months ago
- [NeurIPS 2023] Differentially Private Image Classification by Learning Priors from Random Processes ☆12 · Updated last year
- [CVPR 2024] Official implementation of our paper "Revisiting Adversarial Training at Scale" ☆19 · Updated last year
- [ICML 2023] Are Diffusion Models Vulnerable to Membership Inference Attacks? ☆36 · Updated 9 months ago
- ☆62 · Updated 8 months ago
- ☆11 · Updated last year
- ☆20 · Updated 5 months ago
- Repository introducing research topics on protecting the intellectual property (IP) of AI from a data-centric perspec… ☆22 · Updated last year
- ☆41 · Updated last year
- Official implementation of the ECCV'24 paper "To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Uns… ☆76 · Updated 3 months ago
- Official repo for "An Efficient Membership Inference Attack for the Diffusion Model by Proximal Initialization" ☆14 · Updated last year
- [NeurIPS 2021] "When does Contrastive Learning Preserve Adversarial Robustness from Pretraining to Finetuning?" ☆48 · Updated 3 years ago
- [CVPR 2025] Official repository for "IMMUNE: Improving Safety Against Jailbreaks in Multi-modal LLMs via Inference-Time Alignment" ☆16 · Updated 2 months ago
- [CVPR'23W] "A Pilot Study of Query-Free Adversarial Attack against Stable Diffusion" by Haomin Zhuang, Yihua Zhang, and Sijia Liu ☆26 · Updated 9 months ago
- Code for the CVPR'24 paper "Can Protective Perturbation Safeguard Personal Data from Being Exploited by Stable Diffusion?" ☆17 · Updated last year
- [ICLR 2022 official code] Robust Learning Meets Generative Models: Can Proxy Distributions Improve Adversarial Robustness? ☆29 · Updated 3 years ago
- [ECCV 2024] Official PyTorch implementation of "How Many Unicorns Are in This Image? A Safety Evaluation Benchmark for Vision LLMs" ☆80 · Updated last year
- SEAT ☆20 · Updated last year
- [ICLR'21] Dataset Inference for Ownership Resolution in Machine Learning ☆32 · Updated 2 years ago
- PyTorch implementation for the pilot study on the robustness of latent diffusion models ☆11 · Updated last year