kong13661 / PIA
Official repo for the paper "An Efficient Membership Inference Attack for the Diffusion Model by Proximal Initialization"
☆15 · Updated last year
Alternatives and similar repositories for PIA
Users interested in PIA are comparing it to the repositories listed below
- [ICML 2023] Are Diffusion Models Vulnerable to Membership Inference Attacks? ☆38 · Updated 10 months ago
- Official repo to reproduce the paper "How to Backdoor Diffusion Models?" published at CVPR 2023 ☆92 · Updated 2 months ago
- Code Repo for the NeurIPS 2023 paper "VillanDiffusion: A Unified Backdoor Attack Framework for Diffusion Models" ☆25 · Updated 2 months ago
- Implementation of BadCLIP (https://arxiv.org/pdf/2311.16194.pdf) ☆21 · Updated last year
- ☆13 · Updated last year
- Github repo for One-shot Neural Backdoor Erasing via Adversarial Weight Masking (NeurIPS 2022) ☆15 · Updated 2 years ago
- ☆60 · Updated last year
- ☆58 · Updated 2 years ago
- ☆20 · Updated 10 months ago
- All code and data necessary to replicate experiments in the paper BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Model… ☆11 · Updated 10 months ago
- This is an official repository for Practical Membership Inference Attacks Against Large-Scale Multi-Modal Models: A Pilot Study (ICCV2023… ☆23 · Updated last year
- [MM '24] EvilEdit: Backdooring Text-to-Image Diffusion Models in One Second ☆22 · Updated 8 months ago
- [MM'23 Oral] "Text-to-image diffusion models can be easily backdoored through multimodal data poisoning" ☆29 · Updated 4 months ago
- [CVPR 2023] The official implementation of our CVPR 2023 paper "Detecting Backdoors During the Inference Stage Based on Corruption Robust… ☆23 · Updated 2 years ago
- Official implementation of NeurIPS'24 paper "Defensive Unlearning with Adversarial Training for Robust Concept Erasure in Diffusion Model… ☆42 · Updated 8 months ago
- ☆20 · Updated last year
- An Embarrassingly Simple Backdoor Attack on Self-supervised Learning ☆16 · Updated last year
- ☆32 · Updated 6 months ago
- Code of paper [CVPR'24: Can Protective Perturbation Safeguard Personal Data from Being Exploited by Stable Diffusion?] ☆21 · Updated last year
- [CVPR 2023] Backdoor Defense via Adaptively Splitting Poisoned Dataset ☆48 · Updated last year
- ☆28 · Updated 11 months ago
- Code for the paper "Better Diffusion Models Further Improve Adversarial Training" (ICML 2023) ☆141 · Updated last year
- Code for the paper "Autoregressive Perturbations for Data Poisoning" (NeurIPS 2022) ☆20 · Updated 10 months ago
- ☆51 · Updated 3 years ago
- The official implementation of ECCV'24 paper "To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Uns… ☆78 · Updated 4 months ago
- APBench: A Unified Availability Poisoning Attack and Defenses Benchmark (TMLR 08/2024) ☆30 · Updated 3 months ago
- Source code for ECCV 2022 Poster: Data-free Backdoor Removal based on Channel Lipschitzness ☆33 · Updated 2 years ago
- [ICLR2023] Distilling Cognitive Backdoor Patterns within an Image ☆36 · Updated 8 months ago
- Unlearnable Examples Give a False Sense of Security: Piercing through Unexploitable Data with Learnable Examples ☆10 · Updated 9 months ago
- [NeurIPS 2023] Codes for DiffAttack: Evasion Attacks Against Diffusion-Based Adversarial Purification ☆32 · Updated last year