Machine-Learning-Security-Lab / mia_prune
Membership Inference Attacks and Defenses in Neural Network Pruning
☆28 · Updated 2 years ago
Alternatives and similar repositories for mia_prune:
Users interested in mia_prune are comparing it to the repositories listed below.
- Official implementation of "RelaxLoss: Defending Membership Inference Attacks without Losing Utility" (ICLR 2022) ☆49 · Updated 2 years ago
- Code for the paper "Label-Only Membership Inference Attacks" ☆65 · Updated 3 years ago
- Official implementation of "When Machine Unlearning Jeopardizes Privacy" (ACM CCS 2021) ☆47 · Updated 2 years ago
- Code for our NeurIPS 2019 paper: https://arxiv.org/abs/1910.04749 ☆34 · Updated 5 years ago
- [USENIX Security 2022] Mitigating Membership Inference Attacks by Self-Distillation Through a Novel Ensemble Architecture ☆16 · Updated 2 years ago
- Code for the CSF 2018 paper "Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting" ☆37 · Updated 6 years ago
- Example of the attack described in the paper "Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization" ☆21 · Updated 5 years ago
- [CCS 2021] "DataLens: Scalable Privacy Preserving Training via Gradient Compression and Aggregation" by Boxin Wang*, Fan Wu*, Yunhui Long…
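
For context, nearly every repository in this list attacks or defends the same basic signal: a trained model tends to be more confident on its own training samples than on unseen data. Below is a minimal, hypothetical sketch of a confidence-thresholding membership inference attack; the threshold of 0.9 and all numbers are illustrative assumptions, not taken from any repository above.

```python
import numpy as np

def confidence_attack(softmax_outputs, threshold=0.9):
    # Guess "member" (1) when the top-class confidence exceeds the
    # threshold; overfit models are typically more confident on the
    # samples they were trained on.
    return (softmax_outputs.max(axis=1) >= threshold).astype(int)

# Toy softmax outputs (hypothetical numbers, for illustration only).
scores = np.array([
    [0.97, 0.02, 0.01],  # training (member) sample
    [0.91, 0.05, 0.04],  # training (member) sample
    [0.55, 0.30, 0.15],  # held-out (non-member) sample
    [0.40, 0.35, 0.25],  # held-out (non-member) sample
])
membership = np.array([1, 1, 0, 0])  # ground truth, used only to score the attack

print("attack accuracy:", (confidence_attack(scores) == membership).mean())
```

Stronger attacks in the repositories above replace the fixed threshold with shadow models or label-only queries, while the listed defenses (e.g., RelaxLoss, self-distillation ensembles) aim to shrink the confidence gap this sketch exploits.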