vikram2000b / Fast-Machine-Unlearning
Related projects
Alternatives and complementary repositories for Fast-Machine-Unlearning
- Official repo of the paper "Zero-Shot Machine Unlearning", accepted at IEEE Transactions on Information Forensics and Security
- Code release for "Unrolling SGD: Understanding Factors Influencing Machine Unlearning", published at EuroS&P'22
- Official repo of the paper "Can Bad Teaching Induce Forgetting? Unlearning in Deep Networks using an Incompetent Teacher", accepted at AAAI …
- Official implementation of "RelaxLoss: Defending Membership Inference Attacks without Losing Utility" (ICLR 2022)
- [NeurIPS23 (Spotlight)] "Model Sparsity Can Simplify Machine Unlearning" by Jinghan Jia*, Jiancheng Liu*, Parikshit Ram, Yuguang Yao, Gao…
- Code related to the paper "Machine Unlearning of Features and Labels"
- Methods for removing learned data from neural nets and evaluation of those methods
- [ECCV24] "Challenging Forgets: Unveiling the Worst-Case Forget Sets in Machine Unlearning" by Chongyu Fan*, Jiancheng Liu*, Alfred Hero, …
- Code for "Backdoor Attacks Against Dataset Distillation"
- Camouflage poisoning via machine unlearning
- Reconstructive Neuron Pruning for Backdoor Defense (ICML 2023)
- Certified Removal from Machine Learning Models
- Code for the NeurIPS 2021 paper "Adversarial Neuron Pruning Purifies Backdoored Deep Models"
- Official implementation of "When Machine Unlearning Jeopardizes Privacy" (ACM CCS 2021)
- An Empirical Study of Federated Unlearning: Efficiency and Effectiveness (accepted at ACML 2023, conference track)
- [NeurIPS 2021] Source code for the paper "Qu-ANTI-zation: Exploiting Neural Network Quantization for Achieving Adversarial Outcomes"
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight)
- A PyTorch implementation of the paper "Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage"
- Awesome Federated Unlearning (FU) papers (continually updated)
- Code for the CVPR22 paper "Deep Unlearning via Randomized Conditionally Independent Hessians"
- Membership Inference Attacks and Defenses in Neural Network Pruning
- Code for the AAAI 2021 paper "Membership Privacy for Machine Learning Models Through Knowledge Transfer"
- [ICML 2023] "Are Diffusion Models Vulnerable to Membership Inference Attacks?"
- Code for "Label-Consistent Backdoor Attacks"
- Adversarial attacks and defenses against federated learning