lmgraves / AmnesiacML
Methods for removing learned data from neural nets and evaluation of those methods
☆37 · Updated 4 years ago
Alternatives and similar repositories for AmnesiacML
Users interested in AmnesiacML are comparing it to the repositories listed below
- Code related to the paper "Machine Unlearning of Features and Labels" ☆71 · Updated last year
- ☆58 · Updated 5 years ago
- [USENIX Security 2022] Mitigating Membership Inference Attacks by Self-Distillation Through a Novel Ensemble Architecture ☆17 · Updated 3 years ago
- ☆194 · Updated 2 years ago
- [NeurIPS23 (Spotlight)] "Model Sparsity Can Simplify Machine Unlearning" by Jinghan Jia*, Jiancheng Liu*, Parikshit Ram, Yuguang Yao, Gao… ☆81 · Updated last year
- ☆16 · Updated 4 years ago
- Awesome Federated Unlearning (FU) Papers (Continually Updated) ☆98 · Updated last year
- ☆47 · Updated last year
- Official implementation of "When Machine Unlearning Jeopardizes Privacy" (ACM CCS 2021) ☆49 · Updated 3 years ago
- [arXiv:2411.10023] "Model Inversion Attacks: A Survey of Approaches and Countermeasures" ☆204 · Updated 4 months ago
- Official implementation of "RelaxLoss: Defending Membership Inference Attacks without Losing Utility" (ICLR 2022) ☆48 · Updated 3 years ago
- Code for Backdoor Attacks Against Dataset Distillation ☆35 · Updated 2 years ago
- This repo implements several algorithms for learning with differential privacy. ☆109 · Updated 2 years ago
- [ICML 2023] Are Diffusion Models Vulnerable to Membership Inference Attacks? ☆41 · Updated last year
- Membership Inference, Attribute Inference and Model Inversion attacks implemented using PyTorch. ☆64 · Updated last year
- Certified Removal from Machine Learning Models ☆69 · Updated 4 years ago
- Anti-Backdoor Learning (NeurIPS 2021) ☆84 · Updated 2 years ago
- Code for ML Doctor ☆90 · Updated last year
- [ECCV24] "Challenging Forgets: Unveiling the Worst-Case Forget Sets in Machine Unlearning" by Chongyu Fan*, Jiancheng Liu*, Alfred Hero, … ☆23 · Updated 4 months ago
- Code for the paper: Label-Only Membership Inference Attacks ☆66 · Updated 4 years ago
- Camouflage poisoning via machine unlearning ☆17 · Updated 3 months ago
- ☆70 · Updated 3 years ago
- A PyTorch implementation of the paper "Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage". ☆60 · Updated 2 years ago
- ☆26 · Updated 3 years ago
- The code of AAAI-21 paper titled "Defending against Backdoors in Federated Learning with Robust Learning Rate". ☆34 · Updated 3 years ago
- ☆19 · Updated last year
- Codes for NeurIPS 2021 paper "Adversarial Neuron Pruning Purifies Backdoored Deep Models" ☆58 · Updated 2 years ago
- Code release for "Unrolling SGD: Understanding Factors Influencing Machine Unlearning" published at EuroS&P'22 ☆22 · Updated 3 years ago
- Likelihood Ratio Attack (LiRA) in PyTorch ☆15 · Updated 7 months ago
- [ICLR2023] Distilling Cognitive Backdoor Patterns within an Image ☆36 · Updated last month