lmgraves / AmnesiacML
Methods for removing learned data from neural networks, and evaluation of those methods.
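The amnesiac-unlearning idea behind the repository name is to record, during training, the parameter updates contributed by batches containing data that may later need to be removed, and to subtract those recorded updates when a deletion request arrives. The sketch below illustrates this idea in plain PyTorch; the function names (`train_with_update_log`, `forget`), the index-yielding data loader, and the SGD loop are assumptions made for illustration, not the repository's actual API.

```python
# Minimal sketch of amnesiac unlearning, assuming a standard PyTorch training
# loop. Names and structure are illustrative, not this repository's code.
import torch
import torch.nn as nn


def train_with_update_log(model, loader, sensitive_ids, epochs=1, lr=0.1):
    """Train normally, but log the parameter delta of every batch that
    contains at least one sensitive example (identified by sample index)."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    logged_deltas = []  # one dict of per-parameter deltas per sensitive batch

    for _ in range(epochs):
        for x, y, idx in loader:  # loader is assumed to also yield sample indices
            before = {n: p.detach().clone() for n, p in model.named_parameters()}
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
            if any(int(i) in sensitive_ids for i in idx):
                after = {n: p.detach().clone() for n, p in model.named_parameters()}
                logged_deltas.append({n: after[n] - before[n] for n in after})
    return logged_deltas


def forget(model, logged_deltas):
    """Undo the logged updates by subtracting each recorded batch delta
    from the current parameters."""
    with torch.no_grad():
        for delta in logged_deltas:
            for name, p in model.named_parameters():
                p -= delta[name]
```

Because only batches that touched the sensitive records are undone, the cost of forgetting scales with how often those records appeared during training; in practice a short fine-tune on retained data often follows to recover accuracy.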
Related projects:
- Official implementation of "RelaxLoss: Defending Membership Inference Attacks without Losing Utility" (ICLR 2022)
- Code related to the paper "Machine Unlearning of Features and Labels"
- [NeurIPS 2023 (Spotlight)] "Model Sparsity Can Simplify Machine Unlearning" by Jinghan Jia*, Jiancheng Liu*, Parikshit Ram, Yuguang Yao, Gao…
- [ICML 2023] Are Diffusion Models Vulnerable to Membership Inference Attacks?
- Official implementation of "When Machine Unlearning Jeopardizes Privacy" (ACM CCS 2021)
- Camouflage poisoning via machine unlearning
- Code for Backdoor Attacks Against Dataset Distillation
- Certified Removal from Machine Learning Models
- Official implementation of the ICLR 2022 paper "Adversarial Unlearning of Backdoors via Implicit Hypergradient"
- This is the repository that introduces research topics related to protecting intellectual property (IP) of AI from a data-centric perspec…
- Code for the paper: Label-Only Membership Inference Attacks
- Reconstructive Neuron Pruning for Backdoor Defense (ICML 2023)
- Towards Stable Backdoor Purification through Feature Shift Tuning (NeurIPS 2023)
- Code release for "Unrolling SGD: Understanding Factors Influencing Machine Unlearning", published at EuroS&P '22
- [ICLR 2023] Distilling Cognitive Backdoor Patterns within an Image
- Code for the NeurIPS 2021 paper "Adversarial Neuron Pruning Purifies Backdoored Deep Models"
- Code for the AAAI-21 paper "Defending against Backdoors in Federated Learning with Robust Learning Rate"
- Membership Inference, Attribute Inference and Model Inversion attacks implemented using PyTorch.
- [ICML 2023] Official code implementation of "Chameleon: Adapting to Peer Images for Planting Durable Backdoors in Federated Learning (htt…
- Code for ML Doctor
- This repo implements several algorithms for learning with differential privacy.
- Bilateral Dependency Optimization: Defending Against Model-inversion Attacks
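Several of the projects above evaluate privacy leakage or unlearning through membership inference. A minimal, shadow-model-free baseline is a loss-threshold attack: examples the model has memorised tend to have lower loss than unseen ones. The sketch below is a generic illustration of that baseline, with assumed function names and an externally chosen threshold; it is not the evaluation code of any listed repository.

```python
# Minimal loss-threshold membership-inference check (a generic baseline,
# not the evaluation script of any repository listed above).
import torch
import torch.nn as nn


@torch.no_grad()
def per_example_loss(model, x, y):
    """Per-example cross-entropy loss; lower loss hints at membership."""
    return nn.functional.cross_entropy(model(x), y, reduction="none")


@torch.no_grad()
def membership_attack_accuracy(model, member_batch, nonmember_batch, threshold):
    """Predict 'member' when loss < threshold; report balanced accuracy over
    one batch of known members and one batch of known non-members."""
    xm, ym = member_batch
    xn, yn = nonmember_batch
    tp = (per_example_loss(model, xm, ym) < threshold).float().mean()
    tn = (per_example_loss(model, xn, yn) >= threshold).float().mean()
    return 0.5 * (tp + tn)
```

For an unlearning evaluation, the same check can be run before and after forgetting: attack accuracy on the forgotten examples should fall toward 0.5 (chance level) if unlearning was effective.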