PyTorch implementation of backdoor unlearning.
☆21 · Updated Jun 8, 2022 (3 years ago)
Alternatives and similar repositories for Pytorch-Backdoor-Unlearning
Users interested in Pytorch-Backdoor-Unlearning are comparing it to the repositories listed below.
- ☆31 · Updated Oct 7, 2021 (4 years ago)
- Code for "On the Trade-off between Adversarial and Backdoor Robustness" (NeurIPS 2020) ☆17 · Updated Nov 11, 2020 (5 years ago)
- ☆22 · Updated Mar 20, 2023 (2 years ago)
- Code implementation for "Traceback of Data Poisoning Attacks in Neural Networks" ☆20 · Updated Aug 15, 2022 (3 years ago)
- BrainWash: A Poisoning Attack to Forget in Continual Learning ☆12 · Updated Apr 15, 2024 (last year)
- Official code implementation for the CCS 2022 paper "On the Privacy Risks of Cell-Based NAS Architectures" ☆11 · Updated Nov 21, 2022 (3 years ago)
- Code for the paper "Unifying Distillation with Personalization in Federated Learning" ☆13 · Updated May 31, 2021 (4 years ago)
- [Preprint] Backdoor Attacks on Federated Learning with Lottery Ticket Hypothesis ☆10 · Updated Sep 23, 2021 (4 years ago)
- [Preprint] On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping ☆10 · Updated Feb 27, 2020 (6 years ago)
- Code for the WWW '21 paper "Not All Features Are Equal: Discovering Essential Features for Preserving Prediction Privacy" ☆12 · Updated Feb 15, 2021 (5 years ago)
- [NeurIPS24] "What makes unlearning hard and what to do about it" and [NeurIPS24] "Scalability of memorization-based machine unlearning" ☆21 · Updated May 24, 2025 (9 months ago)
- The code for our NeurIPS 2019 paper: https://arxiv.org/abs/1910.04749 ☆34 · Updated Mar 28, 2020 (5 years ago)
- ☆20 · Updated Oct 28, 2025 (4 months ago)
- ☆17 · Updated Jun 10, 2024 (last year)
- Modular evaluation metrics and a benchmark for large-scale federated learning ☆12 · Updated Jul 25, 2024 (last year)
- ☆18 · Updated Jun 15, 2021 (4 years ago)
- Python code for the paper "Learn What You Want to Unlearn: Unlearning Inversion Attacks against Machine Unlearni…" ☆19 · Updated Apr 3, 2024 (last year)
- Verifying machine unlearning by backdooring ☆20 · Updated Mar 25, 2023 (2 years ago)
- ConvexPolytopePosioning ☆37 · Updated Jan 10, 2020 (6 years ago)
- RAB: Provable Robustness Against Backdoor Attacks ☆39 · Updated Oct 3, 2023 (2 years ago)
- Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks (ICLR '20) ☆33 · Updated Nov 4, 2020 (5 years ago)
- The code for our Updates-Leak paper ☆17 · Updated Jul 23, 2020 (5 years ago)
- Poisoning Deep Learning based Recommender Model in Federated Learning Scenarios ☆18 · Updated Apr 27, 2022 (3 years ago)
- 🔥🔥🔥 Detecting hidden backdoors in Large Language Models with only black-box access ☆52 · Updated Jun 2, 2025 (8 months ago)
- ☆27 · Updated Feb 1, 2023 (3 years ago)
- ☆27 · Updated Oct 17, 2022 (3 years ago)
- Official code for the paper "Membership Inference Attacks Against Recommender Systems" (ACM CCS 2021) ☆20 · Updated Oct 8, 2024 (last year)
- ☆83 · Updated Aug 3, 2021 (4 years ago)
- ☆45 · Updated Nov 10, 2019 (6 years ago)
- Official implementation of the ICLR 2022 paper "Adversarial Unlearning of Backdoors via Implicit Hypergradient" ☆53 · Updated Nov 16, 2022 (3 years ago)
- ☆26 · Updated Dec 1, 2022 (3 years ago)
- ☆27 · Updated Dec 15, 2022 (3 years ago)
- ☆27 · Updated Oct 16, 2022 (3 years ago)
- ☆28 · Updated Aug 21, 2023 (2 years ago)
- [AAAI 2024] MELO: Enhancing Model Editing with Neuron-indexed Dynamic LoRA ☆27 · Updated Apr 9, 2024 (last year)
- Code and data for "Are Large Pre-Trained Language Models Leaking Your Personal Information?" (Findings of EMNLP '22) ☆28 · Updated Oct 31, 2022 (3 years ago)
- ☆199 · Updated Sep 22, 2023 (2 years ago)
- Code for the paper "Private federated learning on vertically partitioned data via entity resolution and additively homomorphic encryption" ☆34 · Updated Nov 17, 2022 (3 years ago)
- Code repository for the USENIX Security 2023 paper "Towards A Proactive ML Approach for Detecting Backdoor Poison Samples" ☆30 · Updated Jul 11, 2023 (2 years ago)