PyTorch implementation of backdoor unlearning.
☆21 · updated Jun 8, 2022
Alternatives and similar repositories for Pytorch-Backdoor-Unlearning
Users interested in Pytorch-Backdoor-Unlearning are comparing it to the repositories listed below.
- Official PyTorch Implementation for Continual Learning and Private Unlearning · ☆18 · updated Jul 19, 2022
- Code Implementation for Traceback of Data Poisoning Attacks in Neural Networks · ☆21 · updated Aug 15, 2022
- ☆22 · updated Mar 20, 2023
- Reconstructive Neuron Pruning for Backdoor Defense (ICML 2023) · ☆39 · updated Dec 24, 2023
- [NeurIPS24] "What makes unlearning hard and what to do about it" / [NeurIPS24] "Scalability of memorization-based machine unlearning" · ☆21 · updated May 24, 2025
- ☆14 · updated Feb 26, 2025
- ☆31 · updated Oct 7, 2021
- Code for the paper "Unifying Distillation with Personalization in Federated Learning" · ☆13 · updated May 31, 2021
- ☆53 · updated Aug 17, 2024
- Code for the paper "Machine Unlearning of Features and Labels" · ☆71 · updated Feb 13, 2024
- ☆11 · updated May 17, 2021
- Python code for the paper "Learn What You Want to Unlearn: Unlearning Inversion Attacks against Machine Unlearni…" · ☆20 · updated Apr 3, 2024
- ☆18 · updated Jun 10, 2024
- [Preprint] Backdoor Attacks on Federated Learning with Lottery Ticket Hypothesis · ☆10 · updated Sep 23, 2021
- Official implementation of the ICLR 2022 paper "Adversarial Unlearning of Backdoors via Implicit Hypergradient" · ☆53 · updated Nov 16, 2022
- Code for the NeurIPS 2019 paper https://arxiv.org/abs/1910.04749 · ☆34 · updated Mar 28, 2020
- Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks (ICLR '20) · ☆33 · updated Nov 4, 2020
- Official code implementation for the CCS 2022 paper "On the Privacy Risks of Cell-Based NAS Architectures" · ☆11 · updated Nov 21, 2022
- ☆27 · updated Feb 1, 2023
- [ICLR 2023, Best Paper Award at ECCV'22 AROW Workshop] FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning · ☆60 · updated Dec 11, 2024
- Code for the WWW21 paper "Not All Features Are Equal: Discovering Essential Features for Preserving Prediction Privacy" · ☆12 · updated Feb 15, 2021
- Modular evaluation metrics and a benchmark for large-scale federated learning · ☆12 · updated Jul 25, 2024
- Official repo for the NeurIPS'24 paper "WAGLE: Strategic Weight Attribution for Effective and Modular Unlearning in Large Language Models" · ☆19 · updated Dec 16, 2024
- SPATL: Salient Parameter Aggregation and Transfer Learning for Heterogeneous Federated Learning · ☆23 · updated Nov 17, 2022
- Poisoning Deep Learning based Recommender Model in Federated Learning Scenarios · ☆19 · updated Apr 27, 2022
- [Preprint] On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping · ☆10 · updated Feb 27, 2020
- Verifying machine unlearning by backdooring · ☆20 · updated Mar 25, 2023
- RAB: Provable Robustness Against Backdoor Attacks · ☆39 · updated Oct 3, 2023
- [AAAI 2024] DataElixir: Purifying Poisoned Dataset to Mitigate Backdoor Attacks via Diffusion Models · ☆12 · updated Dec 5, 2024
- Code for Dual Stealthy Backdoor · ☆14 · updated Feb 10, 2024
- ☆17 · updated Jun 25, 2024
- ☆20 · updated Oct 28, 2025
- ConvexPolytopePosioning · ☆37 · updated Jan 10, 2020
- Code for "Neural Tangent Generalization Attacks" (ICML 2021) · ☆41 · updated Jul 29, 2021
- Byzantine-resilient distributed SGD with TensorFlow · ☆40 · updated Jan 22, 2021
- 🔥🔥🔥 Detecting hidden backdoors in Large Language Models with only black-box access · ☆53 · updated Jun 2, 2025
- ☆26 · updated Dec 1, 2022
- Research prototype of deletion-efficient k-means algorithms · ☆24 · updated Dec 19, 2019
- ☆15 · updated Apr 7, 2023