Hazelsuko07 / EMA
☆13 · Updated 3 years ago
Related projects:
- Methods for removing learned data from neural nets and evaluation of those methods ☆32 · Updated 3 years ago
- ☆31 · Updated 2 weeks ago
- Code for the CSF 2018 paper "Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting" ☆38 · Updated 5 years ago
- ☆24 · Updated 2 years ago
- ☆19 · Updated 2 years ago
- Source code for the ICML 2021 paper "When Does Data Augmentation Help With Membership Inference Attacks?" ☆8 · Updated 3 years ago
- ☆10 · Updated 2 years ago
- ☆23 · Updated 2 years ago
- Official implementation of "RelaxLoss: Defending Membership Inference Attacks without Losing Utility" (ICLR 2022) ☆45 · Updated 2 years ago
- R-GAP: Recursive Gradient Attack on Privacy (accepted at ICLR 2021) ☆33 · Updated last year
- ☆17 · Updated last year
- [ICML 2023] Are Diffusion Models Vulnerable to Membership Inference Attacks? ☆29 · Updated 2 weeks ago
- Camouflage poisoning via machine unlearning ☆14 · Updated last year
- Unofficial PyTorch implementation of the paper "Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures" ☆24 · Updated 11 months ago
- ☆21 · Updated 3 years ago
- ☆14 · Updated 8 months ago
- [CCS 2021] "DataLens: Scalable Privacy Preserving Training via Gradient Compression and Aggregation" by Boxin Wang*, Fan Wu*, Yunhui Long… ☆37 · Updated 2 years ago
- ☆45 · Updated 4 years ago
- Code for the paper "Label-Only Membership Inference Attacks" ☆61 · Updated 3 years ago
- ☆12 · Updated 3 years ago
- ☆11 · Updated 2 years ago
- ☆21 · Updated 2 years ago
- ☆20 · Updated 2 years ago
- Official implementation of "When Machine Unlearning Jeopardizes Privacy" (ACM CCS 2021) ☆45 · Updated 2 years ago
- Learning rate adaptation for differentially private stochastic gradient descent ☆16 · Updated 3 years ago
- Code for "Auditing Data Provenance in Text-Generation Models" (KDD 2019) ☆9 · Updated 5 years ago
- ☆26 · Updated last year
- Reconstructive Neuron Pruning for Backdoor Defense (ICML 2023) ☆25 · Updated 8 months ago
- Code for auditing DP-SGD ☆30 · Updated 2 years ago
- ☆52 · Updated 4 years ago