Pytorch implementation of backdoor unlearning.
☆21 · Updated Jun 8, 2022
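The listing above only names the technique, so here is a hedged, minimal sketch of one common backdoor-unlearning pattern (gradient ascent on known trigger/target pairs, then ordinary fine-tuning on clean data). This is an illustration under stated assumptions, not this repository's actual algorithm; the model, data shapes, and learning rate below are all made up.

```python
# Hypothetical sketch (NOT this repository's exact method): erase a backdoor
# by gradient *ascent* on trigger->target pairs, then recover utility with a
# normal descent step on clean data.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(10, 2)                       # toy stand-in for a trained net
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

poisoned_x = torch.randn(8, 10)                # inputs carrying the trigger (assumed identified)
target_y = torch.zeros(8, dtype=torch.long)    # attacker's target label

loss_before = loss_fn(model(poisoned_x), target_y).item()

# Unlearning step: negate the loss so SGD performs ascent on the backdoor task,
# pushing the model away from the trigger -> target mapping.
opt.zero_grad()
(-loss_fn(model(poisoned_x), target_y)).backward()
opt.step()

loss_after = loss_fn(model(poisoned_x), target_y).item()  # grows after ascent

# Recovery step: ordinary descent on clean data to preserve normal accuracy.
clean_x, clean_y = torch.randn(8, 10), torch.randint(0, 2, (8,))
opt.zero_grad()
loss_fn(model(clean_x), clean_y).backward()
opt.step()
```

Real implementations typically alternate or combine these two objectives and add safeguards (e.g. loss clamping) so the ascent step does not destroy clean accuracy.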
Alternatives and similar repositories for Pytorch-Backdoor-Unlearning
Users that are interested in Pytorch-Backdoor-Unlearning are comparing it to the libraries listed below.
- Official PyTorch Implementation for Continual Learning and Private Unlearning ☆18 · Updated Jul 19, 2022
- BrainWash: A Poisoning Attack to Forget in Continual Learning ☆12 · Updated Apr 15, 2024
- Code Implementation for Traceback of Data Poisoning Attacks in Neural Networks ☆21 · Updated Aug 15, 2022
- ☆22 · Updated Mar 20, 2023
- Reconstructive Neuron Pruning for Backdoor Defense (ICML 2023) ☆39 · Updated Dec 24, 2023
- [NeurIPS24] "What makes unlearning hard and what to do about it" · [NeurIPS24] "Scalability of memorization-based machine unlearning" ☆21 · Updated May 24, 2025
- ☆14 · Updated Feb 26, 2025
- Code for "On the Trade-off between Adversarial and Backdoor Robustness" (NIPS 2020) ☆17 · Updated Nov 11, 2020
- ☆31 · Updated Oct 7, 2021
- Code for the paper "Unifying Distillation with Personalization in Federated Learning" ☆13 · Updated May 31, 2021
- ☆53 · Updated Aug 17, 2024
- Code related to the paper "Machine Unlearning of Features and Labels" ☆71 · Updated Feb 13, 2024
- ☆11 · Updated May 17, 2021
- ☆18 · Updated Jun 10, 2024
- Python code for the paper "Learn What You Want to Unlearn: Unlearning Inversion Attacks against Machine Unlearni…" ☆20 · Updated Apr 3, 2024
- [Preprint] Backdoor Attacks on Federated Learning with Lottery Ticket Hypothesis ☆10 · Updated Sep 23, 2021
- ☆14 · Updated May 17, 2024
- Official implementation of the ICLR 2022 paper "Adversarial Unlearning of Backdoors via Implicit Hypergradient" ☆53 · Updated Nov 16, 2022
- Code for our NeurIPS 2019 paper: https://arxiv.org/abs/1910.04749 ☆34 · Updated Mar 28, 2020
- Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks (ICLR '20) ☆33 · Updated Nov 4, 2020
- Official code implementation for the CCS 2022 paper "On the Privacy Risks of Cell-Based NAS Architectures" ☆11 · Updated Nov 21, 2022
- [ICLR 2023, Best Paper Award at ECCV'22 AROW Workshop] FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning ☆60 · Updated Dec 11, 2024
- Code for the WWW21 paper "Not All Features Are Equal: Discovering Essential Features for Preserving Prediction Privacy" ☆12 · Updated Feb 15, 2021
- Modular evaluation metrics and a benchmark for large-scale federated learning ☆12 · Updated Jul 25, 2024
- Official repo for the NeurIPS'24 paper "WAGLE: Strategic Weight Attribution for Effective and Modular Unlearning in Large Language Models" ☆19 · Updated Dec 16, 2024
- SPATL: Salient Parameter Aggregation and Transfer Learning for Heterogeneous Federated Learning ☆23 · Updated Nov 17, 2022
- Poisoning Deep Learning based Recommender Model in Federated Learning Scenarios ☆19 · Updated Apr 27, 2022
- [Preprint] On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping ☆10 · Updated Feb 27, 2020
- Verifying machine unlearning by backdooring