Pytorch implementation of backdoor unlearning.
☆21 · Updated Jun 8, 2022
Alternatives and similar repositories for Pytorch-Backdoor-Unlearning
Users interested in Pytorch-Backdoor-Unlearning are comparing it to the libraries listed below.
- Official PyTorch Implementation for Continual Learning and Private Unlearning ☆19 · Updated Jul 19, 2022
- BrainWash: A Poisoning Attack to Forget in Continual Learning ☆12 · Updated Apr 15, 2024
- Code Implementation for Traceback of Data Poisoning Attacks in Neural Networks ☆21 · Updated Aug 15, 2022
- ☆22 · Updated Mar 20, 2023
- Reconstructive Neuron Pruning for Backdoor Defense (ICML 2023) ☆39 · Updated Dec 24, 2023
- [NeurIPS24] "What makes unlearning hard and what to do about it" and [NeurIPS24] "Scalability of memorization-based machine unlearning" ☆22 · Updated May 24, 2025
- ☆14 · Updated Feb 26, 2025
- ☆31 · Updated Oct 7, 2021
- Code for the paper "Unifying Distillation with Personalization in Federated Learning" ☆13 · Updated May 31, 2021
- Code for the paper "Machine Unlearning of Features and Labels" ☆71 · Updated Feb 13, 2024
- ☆18 · Updated Jun 10, 2024
- Python code for the paper "Learn What You Want to Unlearn: Unlearning Inversion Attacks against Machine Unlearni…" ☆20 · Updated Apr 3, 2024
- [Preprint] Backdoor Attacks on Federated Learning with Lottery Ticket Hypothesis ☆10 · Updated Sep 23, 2021
- ☆13 · Updated May 17, 2024
- Official implementation of the ICLR 2022 paper "Adversarial Unlearning of Backdoors via Implicit Hypergradient" ☆53 · Updated Nov 16, 2022
- Code for the NeurIPS 2019 paper https://arxiv.org/abs/1910.04749 ☆34 · Updated Mar 28, 2020
- Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks (ICLR 2020) ☆33 · Updated Nov 4, 2020
- Official code for the CCS 2022 paper "On the Privacy Risks of Cell-Based NAS Architectures" ☆11 · Updated Nov 21, 2022
- ☆18 · Updated Jun 15, 2021
- ☆28 · Updated Feb 1, 2023
- [ICLR 2023, Best Paper Award at ECCV '22 AROW Workshop] FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning ☆60 · Updated Dec 11, 2024
- Code for the WWW 2021 paper "Not All Features Are Equal: Discovering Essential Features for Preserving Prediction Privacy" ☆12 · Updated Feb 15, 2021
- Modular evaluation metrics and a benchmark for large-scale federated learning ☆12 · Updated Jul 25, 2024
- Official repo for the NeurIPS 2024 paper "WAGLE: Strategic Weight Attribution for Effective and Modular Unlearning in Large Language Models" ☆19 · Updated Dec 16, 2024
- SPATL: Salient Parameter Aggregation and Transfer Learning for Heterogeneous Federated Learning ☆24 · Updated Nov 17, 2022
- Poisoning Deep Learning based Recommender Model in Federated Learning Scenarios ☆19 · Updated Apr 27, 2022
- [Preprint] On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping ☆10 · Updated Feb 27, 2020
- Verifying machine unlearning by backdooring ☆20 · Updated Mar 25, 2023
- RAB: Provable Robustness Against Backdoor Attacks ☆39 · Updated Oct 3, 2023
- Code for Dual Stealthy Backdoor ☆14 · Updated Feb 10, 2024
- [AAAI 2024] DataElixir: Purifying Poisoned Dataset to Mitigate Backdoor Attacks via Diffusion Models ☆12 · Updated Dec 5, 2024
- ☆17 · Updated Apr 22, 2026
- ☆20 · Updated Oct 28, 2025
- ConvexPolytopePosioning ☆37 · Updated Jan 10, 2020
- Byzantine-resilient distributed SGD with TensorFlow ☆40 · Updated Jan 22, 2021
- Code for "Neural Tangent Generalization Attacks" (ICML 2021) ☆41 · Updated Jul 29, 2021
- 🔥🔥🔥 Detecting hidden backdoors in Large Language Models with only black-box access ☆55 · Updated Jun 2, 2025
- Code for the AAAI 2021 paper "Defending against Backdoors in Federated Learning with Robust Learning Rate" ☆38 · Updated Oct 3, 2022
- Research prototype of deletion-efficient k-means algorithms ☆24 · Updated Dec 19, 2019