deabfc / dp-promise
Code for "dp-promise: Differentially Private Diffusion Probabilistic Models for Image Synthesis"
☆15 · Updated 11 months ago
Alternatives and similar repositories for dp-promise:
Users interested in dp-promise are comparing it to the repositories listed below.
- ☆25 · Updated last year
- ☆26 · Updated last year
- ☆19 · Updated last year
- ☆25 · Updated 3 years ago
- Local Differential Privacy for Federated Learning ☆16 · Updated 2 years ago
- ☆48 · Updated last year
- DPSUR ☆26 · Updated last month
- Code for the paper "Label-Only Membership Inference Attacks" ☆64 · Updated 3 years ago
- Implementation of calibration bounds for differential privacy in the shuffle model ☆23 · Updated 4 years ago
- ☆13 · Updated 9 months ago
- Implementation code for the paper "A Practical Clean-Label Backdoor Attack with Limited Information in Vertical Federated Learning" ☆11 · Updated last year
- [USENIX Security 2024] PrivImage: Differentially Private Synthetic Image Generation using Diffusion Models with Semantic-Aware Pretraining ☆19 · Updated 4 months ago
- Official implementation of "When Machine Unlearning Jeopardizes Privacy" (ACM CCS 2021) ☆47 · Updated 2 years ago
- [ICML 2023] Official code implementation of "Chameleon: Adapting to Peer Images for Planting Durable Backdoors in Federated Learning" (htt… ☆38 · Updated 2 months ago
- Code to reproduce the experiments in "Antipodes of Label Differential Privacy: PATE and ALIBI" ☆30 · Updated 2 years ago
- Code for ML Doctor ☆88 · Updated 7 months ago
- This repo implements several algorithms for learning with differential privacy. ☆106 · Updated 2 years ago
- Code for the paper "Machine Unlearning of Features and Labels" ☆69 · Updated last year
- ☆26 · Updated 3 years ago
- ☆14 · Updated last year
- Fast, memory-efficient, scalable optimization of deep learning with differential privacy ☆115 · Updated 2 months ago
- Python code for the paper "Learn What You Want to Unlearn: Unlearning Inversion Attacks against Machine Unlearning" ☆14 · Updated 11 months ago
- Code for the AAAI-21 paper "Defending against Backdoors in Federated Learning with Robust Learning Rate" ☆31 · Updated 2 years ago
- Membership Inference, Attribute Inference and Model Inversion attacks implemented using PyTorch ☆58 · Updated 5 months ago
- ☆38 · Updated 3 years ago
- [ICML 2023] Are Diffusion Models Vulnerable to Membership Inference Attacks? ☆34 · Updated 6 months ago
- Amortized version of the differentially private SGD algorithm published in "Deep Learning with Differential Privacy" by Abadi et al. Enfo… ☆41 · Updated 11 months ago (see the DP-SGD sketch after this list)
- [ICLR 2023, Best Paper Award at ECCV’22 AROW Workshop] FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning ☆53 · Updated 3 months ago
- ☆69 · Updated 2 years ago
- Code for the attack scheme in the paper "Backdoor Attack Against Split Neural Network-Based Vertical Federated Learning" ☆17 · Updated last year
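Several of the repositories above revolve around DP-SGD, the training recipe from Abadi et al.'s "Deep Learning with Differential Privacy": clip each example's gradient to a norm bound, then add calibrated Gaussian noise before the optimizer step. The sketch below is only an illustration of that recipe on a toy classifier using the Opacus library; it is not code from dp-promise or from any repository listed here, and the model, data, and hyperparameters are placeholders chosen for brevity.

```python
# Minimal DP-SGD sketch with Opacus (illustrative only, not the dp-promise code).
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Toy data and model stand in for a real training pipeline.
X = torch.randn(512, 32)
y = torch.randint(0, 2, (512,))
loader = DataLoader(TensorDataset(X, y), batch_size=64)

model = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

# Wrap model/optimizer/loader so each step clips per-sample gradients
# and adds Gaussian noise scaled to the clipping bound (the DP-SGD recipe).
privacy_engine = PrivacyEngine()
model, optimizer, loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    noise_multiplier=1.0,   # noise scale relative to the clipping norm
    max_grad_norm=1.0,      # per-sample gradient clipping bound C
)

for epoch in range(3):
    for xb, yb in loader:
        if len(yb) == 0:    # Poisson sampling can yield empty batches
            continue
        optimizer.zero_grad()
        loss = criterion(model(xb), yb)
        loss.backward()
        optimizer.step()

# Report the privacy budget spent so far for a chosen delta.
print("epsilon:", privacy_engine.get_epsilon(delta=1e-5))
```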