ml-postech / gradient-inversion-generative-image-prior
☆44 · Updated 3 years ago
Alternatives and similar repositories for gradient-inversion-generative-image-prior
Users interested in gradient-inversion-generative-image-prior are comparing it to the repositories listed below.
- [NeurIPS23 (Spotlight)] "Model Sparsity Can Simplify Machine Unlearning" by Jinghan Jia*, Jiancheng Liu*, Parikshit Ram, Yuguang Yao, Gao… ☆81 · Updated last year
- ☆26 · Updated 3 years ago
- [ICML 2023] Are Diffusion Models Vulnerable to Membership Inference Attacks? ☆41 · Updated last year
- Official repo for An Efficient Membership Inference Attack for the Diffusion Model by Proximal Initialization ☆16 · Updated last year
- ☆32 · Updated last year
- ☆48 · Updated last year
- ☆58 · Updated 5 years ago
- Code for "Variational Model Inversion Attacks", Wang et al., NeurIPS 2021 ☆22 · Updated 3 years ago
- ☆45 · Updated 2 years ago
- [USENIX Security 2022] Mitigating Membership Inference Attacks by Self-Distillation Through a Novel Ensemble Architecture ☆17 · Updated 3 years ago
- [NeurIPS 2023] Differentially Private Image Classification by Learning Priors from Random Processes ☆12 · Updated 2 years ago
- ☆13 · Updated 2 years ago
- A PyTorch implementation of the paper "Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage" ☆61 · Updated 3 years ago
- ☆54 · Updated 4 years ago
- Official implementation of "When Machine Unlearning Jeopardizes Privacy" (ACM CCS 2021) ☆49 · Updated 3 years ago
- This repository contains Python code for the paper "Learn What You Want to Unlearn: Unlearning Inversion Attacks against Machine Unlearni… ☆19 · Updated last year
- Official repo to reproduce the paper "How to Backdoor Diffusion Models?" published at CVPR 2023 ☆94 · Updated last month
- Likelihood Ratio Attack (LiRA) in PyTorch ☆15 · Updated 8 months ago
- Code release for "Unrolling SGD: Understanding Factors Influencing Machine Unlearning" published at EuroS&P'22 ☆23 · Updated 3 years ago
- Camouflage poisoning via machine unlearning ☆17 · Updated 4 months ago
- ☆21 · Updated 2 years ago
- Code related to the paper "Machine Unlearning of Features and Labels" ☆71 · Updated last year
- [KDD 2022] "Bilateral Dependency Optimization: Defending Against Model-inversion Attacks" ☆24 · Updated last month
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight) ☆26 · Updated 11 months ago
- Anti-Backdoor Learning (NeurIPS 2021) ☆84 · Updated 2 years ago
- Official code for "Understanding Deep Gradient Leakage via Inversion Influence Functions", NeurIPS 2023 ☆16 · Updated 2 years ago
- [ICLR24 (Spotlight)] "SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency in Both Image Classification and Generation… ☆137 · Updated 5 months ago
- Towards Machine Unlearning Benchmarks: Forgetting the Personal Identities in Facial Recognition Systems ☆63 · Updated 5 months ago
- Differentially Private Diffusion Models ☆104 · Updated last year
- R-GAP: Recursive Gradient Attack on Privacy [Accepted at ICLR 2021] ☆37 · Updated 2 years ago