ayushkumartarun / deep-regression-unlearning
Official repo of the paper "Deep Regression Unlearning", accepted at ICML 2023
☆13 · Updated last year
Alternatives and similar repositories for deep-regression-unlearning:
Users interested in deep-regression-unlearning are comparing it to the libraries listed below.
- Camouflage poisoning via machine unlearning ☆16 · Updated 2 years ago
- Official repo of the paper "Zero-Shot Machine Unlearning", accepted in IEEE Transactions on Information Forensics and Security ☆38 · Updated last year
- Reconstructive Neuron Pruning for Backdoor Defense (ICML 2023) ☆35 · Updated last year
- Official implementation of "RelaxLoss: Defending Membership Inference Attacks without Losing Utility" (ICLR 2022) ☆48 · Updated 2 years ago
- Code for "Backdoor Attacks Against Dataset Distillation" ☆32 · Updated last year
- Verifying machine unlearning by backdooring ☆20 · Updated last year
- ☆12 · Updated 9 months ago
- The official implementation of the USENIX Security '23 paper "Meta-Sift" -- Ten minutes or less to find a 1000-size or larger clean subset on … ☆18 · Updated last year
- ☆54 · Updated 2 years ago
- PyTorch implementation of backdoor unlearning. ☆17 · Updated 2 years ago
- A repository introducing research topics related to protecting the intellectual property (IP) of AI from a data-centric perspec… ☆22 · Updated last year
- ☆10 · Updated 4 years ago
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight) ☆25 · Updated 3 months ago
- Code release for "Unrolling SGD: Understanding Factors Influencing Machine Unlearning", published at EuroS&P '22 ☆22 · Updated 2 years ago
- [ICLR 2023, Best Paper Award at ECCV '22 AROW Workshop] FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning ☆52 · Updated 2 months ago
- Official implementation of the NeurIPS 2022 paper "Pre-activation Distributions Expose Backdoor Neurons" ☆14 · Updated 2 years ago
- How Robust are Randomized Smoothing based Defenses to Data Poisoning? (CVPR 2021) ☆13 · Updated 3 years ago
- RAB: Provable Robustness Against Backdoor Attacks ☆39 · Updated last year
- ☆24 · Updated 2 years ago
- Code for the paper "Deep Partition Aggregation: Provable Defenses against General Poisoning Attacks" ☆12 · Updated 2 years ago
- ☆15 · Updated last year
- ☆64 · Updated 4 years ago
- A PyTorch implementation of the paper "Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage". ☆57 · Updated 2 years ago
- Surrogate Model Extension (SME): A Fast and Accurate Weight Update Attack on Federated Learning [Accepted at ICML 2023] ☆11 · Updated 10 months ago
- Official implementation of "When Machine Unlearning Jeopardizes Privacy" (ACM CCS 2021) ☆46 · Updated 2 years ago
- [ICLR 2024] "Backdoor Federated Learning by Poisoning Backdoor-Critical Layers" ☆29 · Updated 2 months ago
- ☆19 · Updated 3 years ago
- ☆68 · Updated 2 years ago
- A PyTorch-based repository for Federated Learning with Differential Privacy ☆15 · Updated last year
- ☆24 · Updated 2 years ago