Official repo for "An Efficient Membership Inference Attack for the Diffusion Model by Proximal Initialization" (PIA)
☆16 · Updated Mar 8, 2024
Alternatives and similar repositories for PIA
Users interested in PIA are comparing it to the repositories listed below.
- [ICML 2023] Are Diffusion Models Vulnerable to Membership Inference Attacks? — ☆43 · Updated Sep 4, 2024
- [NeurIPS 2024] "Membership Inference on Text-to-image Diffusion Models via Conditional Likelihood Discrepancy" — ☆12 · Updated Sep 15, 2025
- Official repository for "Practical Membership Inference Attacks Against Large-Scale Multi-Modal Models: A Pilot Study" (ICCV2023… — ☆24 · Updated Sep 29, 2023
- ☆19 · Updated Feb 22, 2023
- ☆15 · Updated Apr 4, 2024
- ☆10 · Updated Mar 20, 2023
- Code for the paper "Overconfidence is a Dangerous Thing: Mitigating Membership Inference Attacks by Enforcing Less Confident Prediction" … — ☆12 · Updated Sep 6, 2023
- [BMVC 2023] Semantic Adversarial Attacks via Diffusion Models — ☆25 · Updated Nov 30, 2023
- Gaussian Membership Inference Privacy (NeurIPS 2023) — ☆12 · Updated Jul 27, 2024
- ☆25 · Updated Nov 14, 2022
- ☆13 · Updated May 1, 2024
- ☆14 · Updated May 8, 2024
- Code for "dp-promise: Differentially Private Diffusion Probabilistic Models for Image Synthesis" — ☆23 · Updated Apr 5, 2024
- ☆20 · Updated Oct 28, 2025
- Public implementation of the paper "On the Importance of Difficulty Calibration in Membership Inference Attacks" — ☆16 · Updated Dec 1, 2021
- Code for the paper "Fast and Complete: Enabling Complete Neural Network Verification with Rapid and Massively Parallel Incomplete Verifiers" — ☆17 · Updated Jan 27, 2023
- Source code and scripts for the paper "Is Difficulty Calibration All We Need? Towards More Practical Membership Inference Attacks" — ☆20 · Updated Dec 10, 2024
- [MM '24] EvilEdit: Backdooring Text-to-Image Diffusion Models in One Second — ☆28 · Updated Nov 19, 2024
- ☆19 · Updated Mar 6, 2023
- ☆21 · Updated Oct 25, 2023
- Official implementation of "GAN-Leaks: A Taxonomy of Membership Inference Attacks against Generative Models" (CCS 2020) — ☆47 · Updated Apr 22, 2022
- Code repo for the NeurIPS 2023 paper "VillanDiffusion: A Unified Backdoor Attack Framework for Diffusion Models" — ☆27 · Updated Sep 18, 2025
- Official implementation of "RelaxLoss: Defending Membership Inference Attacks without Losing Utility" (ICLR 2022) — ☆48 · Updated Aug 18, 2022
- Code for the paper "ConceptPrune: Concept Editing in Diffusion Models via Skilled Neuron Pruning" — ☆22 · Updated Aug 13, 2024
- [CVPR'24 Oral] MetaCloak: Preventing Unauthorized Subject-driven Text-to-image Diffusion-based Synthesis via Meta-learning — ☆28 · Updated Nov 19, 2024
- ☆26 · Updated Jan 11, 2023
- [AAAI'23] Federated Robustness Propagation: Sharing Robustness in Heterogeneous Federated Learning — ☆28 · Updated Feb 23, 2023
- Source code for MEA-Defender; paper accepted at the IEEE Symposium on Security and Privacy (S&P) 2024 — ☆29 · Updated Nov 19, 2023
- Code for the paper "Label-Only Membership Inference Attacks" — ☆68 · Updated Sep 11, 2021
- ☆25 · Updated Jan 20, 2019
- β-CROWN: Efficient Bound Propagation with Per-neuron Split Constraints for Neural Network Verification — ☆31 · Updated Nov 9, 2021
- Membership Inference Attacks and Defenses in Neural Network Pruning — ☆28 · Updated Jul 12, 2022
- ☆28 · Updated Aug 7, 2024
- ☆33 · Updated Nov 27, 2023
- Official repository of the paper "Marking Code Without Breaking It: Code Watermarking for Detecting LLM-Generated Code" (Findings of EACL … — ☆12 · Updated Feb 11, 2026
- Certified Patch Robustness via Smoothed Vision Transformers — ☆42 · Updated Dec 17, 2021
- [ICLR24 (Spotlight)] "SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency in Both Image Classification and Generation… — ☆142 · Updated this week
- [MM'23 Oral] "Text-to-image diffusion models can be easily backdoored through multimodal data poisoning" — ☆31 · Updated Aug 14, 2025
- ☆12 · Updated May 6, 2022