NYU-DICE-Lab / circumventing-concept-erasure
☆17 · Updated last year
Alternatives and similar repositories for circumventing-concept-erasure:
Users interested in circumventing-concept-erasure are comparing it to the libraries listed below.
- ☆27 · Updated last month
- Official implementation of NeurIPS'24 paper "Defensive Unlearning with Adversarial Training for Robust Concept Erasure in Diffusion Model…" ☆39 · Updated 4 months ago
- ☆13 · Updated 8 months ago
- ☆60 · Updated 5 months ago
- [ICLR 2022 official code] Robust Learning Meets Generative Models: Can Proxy Distributions Improve Adversarial Robustness? ☆29 · Updated 2 years ago
- Certified robustness "for free" using off-the-shelf diffusion models and classifiers ☆38 · Updated last year
- ☆18 · Updated last year
- Implementation of "Adversarial purification with Score-based generative models", ICML 2021 ☆29 · Updated 3 years ago
- The official implementation of ECCV'24 paper "To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Uns…" ☆71 · Updated 2 weeks ago
- Official repo for An Efficient Membership Inference Attack for the Diffusion Model by Proximal Initialization ☆12 · Updated last year
- ☆31 · Updated 8 months ago
- ☆53 · Updated last year
- ☆26 · Updated 9 months ago
- [CVPR'24 Oral] Metacloak: Preventing Unauthorized Subject-driven Text-to-image Diffusion-based Synthesis via Meta-learning ☆21 · Updated 3 months ago
- ☆58 · Updated 2 years ago
- ☆53 · Updated last year
- ☆26 · Updated 3 months ago
- ☆12 · Updated 3 months ago
- [ICLR 2023, Spotlight] Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning ☆30 · Updated last year
- ☆25 · Updated 7 months ago
- ☆10 · Updated 3 years ago
- PDM-based Purifier ☆20 · Updated 4 months ago
- [NeurIPS 2021] "When does Contrastive Learning Preserve Adversarial Robustness from Pretraining to Finetuning?" ☆48 · Updated 3 years ago
- ☆40 · Updated last year
- Code for the paper "Autoregressive Perturbations for Data Poisoning" (NeurIPS 2022) ☆20 · Updated 6 months ago
- Code for the paper [CVPR'24: Can Protective Perturbation Safeguard Personal Data from Being Exploited by Stable Diffusion?] ☆15 · Updated 11 months ago
- Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses, NeurIPS Spotlight 2020 ☆27 · Updated 4 years ago
- [NeurIPS 2023] Differentially Private Image Classification by Learning Priors from Random Processes ☆12 · Updated last year
- [ICML 2024] Prompting4Debugging: Red-Teaming Text-to-Image Diffusion Models by Finding Problematic Prompts (Official Pytorch Implementati… ☆41 · Updated 3 months ago
- Dataset Interfaces: Diagnosing Model Failures Using Controllable Counterfactual Generation ☆45 · Updated 2 years ago