shash42 / Evaluating-Inexact-Unlearning
☆11 · Updated last year
Alternatives and similar repositories for Evaluating-Inexact-Unlearning:
Users interested in Evaluating-Inexact-Unlearning are comparing it to the repositories listed below.
- [NeurIPS 2023 (Spotlight)] "Model Sparsity Can Simplify Machine Unlearning" by Jinghan Jia*, Jiancheng Liu*, Parikshit Ram, Yuguang Yao, Gao… ☆66 · Updated 11 months ago
- ☆56 · Updated 4 years ago
- ☆43 · Updated 5 months ago
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight) ☆25 · Updated 2 months ago
- [ECCV 2024] "Challenging Forgets: Unveiling the Worst-Case Forget Sets in Machine Unlearning" by Chongyu Fan*, Jiancheng Liu*, Alfred Hero, … ☆22 · Updated 4 months ago
- Certified Removal from Machine Learning Models ☆63 · Updated 3 years ago
- [ICLR 2023 (Spotlight)] Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning ☆30 · Updated last year
- ☆23 · Updated 3 years ago
- [ICLR 2022, official code] "Robust Learning Meets Generative Models: Can Proxy Distributions Improve Adversarial Robustness?" ☆29 · Updated 2 years ago
- ☆32 · Updated last year
- [ICML 2023] "Are Diffusion Models Vulnerable to Membership Inference Attacks?" ☆32 · Updated 5 months ago
- Unofficial implementation of the DeepMind papers "Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples… ☆95 · Updated 2 years ago
- Camouflage poisoning via machine unlearning ☆16 · Updated 2 years ago
- Code for the ICLR 2021 oral paper "Geometry-aware Instance-reweighted Adversarial Training" ☆59 · Updated 3 years ago
- [NeurIPS 2023] Differentially Private Image Classification by Learning Priors from Random Processes ☆12 · Updated last year
- Code for the CVPR 2020 paper "Adversarial Vertex Mixup: Toward Better Adversarially Robust Generalization" ☆13 · Updated 4 years ago
- Code for "Adversarial robustness against multiple and single $l_p$-threat models via quick fine-tuning of robust classifiers" ☆18 · Updated 2 years ago
- Official repo for "An Efficient Membership Inference Attack for the Diffusion Model by Proximal Initialization" ☆12 · Updated 11 months ago
- Source code for the ICML 2021 paper "When Does Data Augmentation Help With Membership Inference Attacks?" ☆8 · Updated 3 years ago
- ☆48 · Updated 3 years ago
- Code release for "Unrolling SGD: Understanding Factors Influencing Machine Unlearning", published at EuroS&P 2022 ☆22 · Updated 2 years ago
- Implementation of "Understanding Robust Overfitting of Adversarial Training and Beyond" (ICML 2022) ☆12 · Updated 2 years ago
- [USENIX Security 2022] Mitigating Membership Inference Attacks by Self-Distillation Through a Novel Ensemble Architecture ☆16 · Updated 2 years ago
- ☆11 · Updated last year
- ☆26 · Updated 8 months ago
- GitHub repo for "One-shot Neural Backdoor Erasing via Adversarial Weight Masking" (NeurIPS 2022) ☆14 · Updated 2 years ago
- ☆17 · Updated last year
- Code for the paper "A Light Recipe to Train Robust Vision Transformers" [SaTML 2023] ☆53 · Updated 2 years ago
- [ICLR 2022] "Reliable Adversarial Distillation with Unreliable Teachers" ☆21 · Updated 2 years ago
- Official implementation of "RelaxLoss: Defending Membership Inference Attacks without Losing Utility" (ICLR 2022) ☆48 · Updated 2 years ago