yonsei-sslab / MIA
🔒 Implementation of Shokri et al. (2016), "Membership Inference Attacks against Machine Learning Models"
☆27 · updated 2 years ago
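For orientation, below is a minimal sketch of the shadow-model attack the paper introduces: train a shadow model on data from the same distribution as the target's, label its confidence vectors as member/non-member, and fit an attack classifier on them. The synthetic data, the use of scikit-learn, and the single shadow/attack model pair (the paper trains many shadow models and one attack model per class) are simplifying assumptions for illustration, not this repo's actual pipeline.

```python
# Minimal sketch of the shadow-model attack (Shokri et al., 2016) using
# scikit-learn. Assumptions for brevity: synthetic data, one shadow model,
# one attack model (the paper uses many shadows and per-class attack models).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in for the target's data distribution.
X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_tgt, X_shd, y_tgt, y_shd = train_test_split(X, y, test_size=0.5, random_state=0)

# Target model: half of its split are members, the other half non-members.
Xt_in, Xt_out, yt_in, _ = train_test_split(X_tgt, y_tgt, test_size=0.5, random_state=1)
target = RandomForestClassifier(random_state=0).fit(Xt_in, yt_in)

# Shadow model: trained on disjoint data, so membership is known to the attacker.
Xs_in, Xs_out, ys_in, _ = train_test_split(X_shd, y_shd, test_size=0.5, random_state=2)
shadow = RandomForestClassifier(random_state=1).fit(Xs_in, ys_in)

# Attack model: classify the shadow's confidence vectors as member / non-member.
atk_X = np.vstack([shadow.predict_proba(Xs_in), shadow.predict_proba(Xs_out)])
atk_y = np.r_[np.ones(len(Xs_in)), np.zeros(len(Xs_out))]
attack = RandomForestClassifier(random_state=2).fit(atk_X, atk_y)

# Transfer the attack to the target model.
tst_X = np.vstack([target.predict_proba(Xt_in), target.predict_proba(Xt_out)])
tst_y = np.r_[np.ones(len(Xt_in)), np.zeros(len(Xt_out))]
print("membership inference accuracy vs. target:", attack.score(tst_X, tst_y))
```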
Related projects:
- Implementation of the paper "Membership Inference Attacks Against Machine Learning Models" by Shokri et al. ☆47 · updated 5 years ago
- Official implementation of "RelaxLoss: Defending Membership Inference Attacks without Losing Utility" (ICLR 2022) ☆45 · updated 2 years ago
- Official implementation of "When Machine Unlearning Jeopardizes Privacy" (ACM CCS 2021) ☆45 · updated 2 years ago
- Membership Inference Attacks and Defenses in Neural Network Pruning ☆25 · updated 2 years ago
- Code release for "Unrolling SGD: Understanding Factors Influencing Machine Unlearning", published at EuroS&P '22 ☆22 · updated 2 years ago
- [USENIX Security 2022] Mitigating Membership Inference Attacks by Self-Distillation Through a Novel Ensemble Architecture ☆15 · updated 2 years ago
- Membership Inference, Attribute Inference, and Model Inversion attacks implemented using PyTorch. ☆54 · updated last year
- [NeurIPS 2023 (Spotlight)] "Model Sparsity Can Simplify Machine Unlearning" by Jinghan Jia*, Jiancheng Liu*, Parikshit Ram, Yuguang Yao, Gao… ☆62 · updated 6 months ago
- Code for the paper "Label-Only Membership Inference Attacks" (see the label-only sketch after this list) ☆61 · updated 3 years ago
- Code related to the paper "Machine Unlearning of Features and Labels" ☆66 · updated 7 months ago
- Official code for "Understanding Deep Gradient Leakage via Inversion Influence Functions" (NeurIPS 2023); a minimal gradient-inversion sketch follows this list ☆14 · updated 11 months ago
- Code for ML Doctor ☆84 · updated last month
- This repo implements several algorithms for learning with differential privacy. ☆100 · updated last year
- Official implementation of "Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective" ☆52 · updated last year
- A PyTorch implementation of the paper "Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage" ☆55 · updated last year
- Code for the AAAI 2021 paper "Membership Privacy for Machine Learning Models Through Knowledge Transfer" ☆11 · updated 3 years ago
- ICML 2022 code for "Neurotoxin: Durable Backdoors in Federated Learning" (https://arxiv.org/abs/2206.10341) ☆61 · updated last year
- Source code for the ICML 2021 paper "When Does Data Augmentation Help With Membership Inference Attacks?" ☆8 · updated 3 years ago
- Privacy attacks on Split Learning ☆37 · updated 2 years ago
- Camouflage poisoning via machine unlearning ☆14 · updated last year
- Official implementation of "GAN-Leaks: A Taxonomy of Membership Inference Attacks against Generative Models" (CCS 2020) ☆46 · updated 2 years ago
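For the "Label-Only Membership Inference Attacks" entry above: when the target returns only hard labels, the attack scores each point by how robust its predicted label is to perturbation, since training points tend to sit further from the decision boundary. A minimal sketch under illustrative assumptions: synthetic data, Gaussian noise in place of the paper's adversarial perturbations, and a crude threshold rather than the paper's shadow-model calibration.

```python
# Minimal sketch of the label-only idea: with only hard labels, score each
# point by how often its predicted label survives random noise, then
# threshold. Noise model and threshold are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_in, X_out, y_in, _ = train_test_split(X, y, test_size=0.5, random_state=1)
target = RandomForestClassifier(random_state=0).fit(X_in, y_in)

def robustness(model, X, n_trials=25, sigma=0.5):
    """Fraction of noisy copies whose predicted label matches the clean one."""
    base = model.predict(X)
    agree = np.zeros(len(X))
    for _ in range(n_trials):
        agree += model.predict(X + rng.normal(scale=sigma, size=X.shape)) == base
    return agree / n_trials

# Members' labels tend to be more robust to perturbation than non-members'.
s_in, s_out = robustness(target, X_in), robustness(target, X_out)
thr = 0.5 * (s_in.mean() + s_out.mean())  # crude illustrative threshold
print("balanced accuracy:", 0.5 * ((s_in > thr).mean() + (s_out <= thr).mean()))
```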
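Several of the gradient-leakage repos above build on the gradient-inversion idea from "Deep Leakage from Gradients" (Zhu et al., 2019): given a victim's gradient, optimize a dummy input and label so that their gradient matches it. A minimal PyTorch sketch; the toy model and random data are illustrative assumptions, not any listed repo's setup.

```python
# Minimal sketch of gradient inversion ("Deep Leakage from Gradients",
# Zhu et al., 2019). Toy model and random data are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))

# The victim's private example and the gradient it leaks (e.g., in FL).
x_true = torch.randn(1, 8)
y_true = torch.tensor([1])
loss_true = nn.functional.cross_entropy(model(x_true), y_true)
true_grads = torch.autograd.grad(loss_true, model.parameters())

# The attacker optimizes a dummy input and soft label to match that gradient.
x_dummy = torch.randn(1, 8, requires_grad=True)
y_dummy = torch.randn(1, 2, requires_grad=True)
opt = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    opt.zero_grad()
    # Soft-label cross-entropy on the dummy pair.
    dummy_loss = torch.sum(
        -torch.softmax(y_dummy, dim=-1) * torch.log_softmax(model(x_dummy), dim=-1)
    )
    dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(), create_graph=True)
    diff = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    diff.backward()
    return diff

for _ in range(30):
    opt.step(closure)

print("reconstruction error:", (x_dummy - x_true).norm().item())
```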