cleverhans-lab / unrolling-sgd
Code release for "Unrolling SGD: Understanding Factors Influencing Machine Unlearning", published at EuroS&P'22.
☆22 · Updated 2 years ago
Alternatives and similar repositories for unrolling-sgd:
Users interested in unrolling-sgd are comparing it to the repositories listed below.
- Membership Inference Attacks and Defenses in Neural Network Pruning ☆28 · Updated 2 years ago
- Camouflage poisoning via machine unlearning ☆16 · Updated 2 years ago
- Official implementation of "RelaxLoss: Defending Membership Inference Attacks without Losing Utility" (ICLR 2022) ☆48 · Updated 2 years ago
- ☆23 · Updated 3 years ago
- ☆24 · Updated 2 years ago
- [NeurIPS23 (Spotlight)] "Model Sparsity Can Simplify Machine Unlearning" by Jinghan Jia*, Jiancheng Liu*, Parikshit Ram, Yuguang Yao, Gao… ☆66 · Updated 11 months ago
- Certified Removal from Machine Learning Models ☆64 · Updated 3 years ago
- ☆43 · Updated 6 months ago
- ☆11 · Updated 2 years ago
- Official implementation of "When Machine Unlearning Jeopardizes Privacy" (ACM CCS 2021) ☆46 · Updated 2 years ago
- ☆33 · Updated last year
- ☆31 · Updated 5 months ago
- ☆56 · Updated 4 years ago
- [ICLR'21] Dataset Inference for Ownership Resolution in Machine Learning ☆32 · Updated 2 years ago
- Code related to the paper "Machine Unlearning of Features and Labels" ☆68 · Updated last year
- [CCS 2021] "DataLens: Scalable Privacy Preserving Training via Gradient Compression and Aggregation" by Boxin Wang*, Fan Wu*, Yunhui Long… ☆37 · Updated 3 years ago
- ☆23 · Updated 2 years ago
- [USENIX Security 2022] Mitigating Membership Inference Attacks by Self-Distillation Through a Novel Ensemble Architecture ☆16 · Updated 2 years ago
- ☆20 · Updated last year
- Private Adaptive Optimization with Side Information (ICML '22) ☆16 · Updated 2 years ago
- RAB: Provable Robustness Against Backdoor Attacks ☆39 · Updated last year
- [CVPRW'22] A privacy attack that exploits Adversarial Training models to compromise the privacy of Federated Learning systems ☆12 · Updated 2 years ago
- [ICML 2023] Are Diffusion Models Vulnerable to Membership Inference Attacks? ☆32 · Updated 5 months ago
- Example of the attack described in the paper "Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization" ☆21 · Updated 5 years ago
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight) ☆25 · Updated 3 months ago
- A PyTorch implementation of the paper "Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage" ☆57 · Updated 2 years ago
- ☆11 · Updated last year
- The official implementation of the USENIX Security'23 paper "Meta-Sift" -- ten minutes or less to find a 1000-size or larger clean subset on … ☆18 · Updated last year
- Official implementation of the ICLR 2022 paper "Adversarial Unlearning of Backdoors via Implicit Hypergradient" ☆54 · Updated 2 years ago
- Code for the paper "Label-Only Membership Inference Attacks" ☆64 · Updated 3 years ago