thecml / dpsgd-optimizer
Amortized version of the differentially private SGD algorithm published in "Deep Learning with Differential Privacy" by Abadi et al. Enforces privacy by clipping per-example gradients and sanitising them with Gaussian noise during training.
☆41 · Updated 7 months ago
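The mechanism described above (clip each example's gradient, then add Gaussian noise) can be sketched in a few lines. This is a minimal NumPy illustration of a single DP-SGD step, not the repository's actual implementation; the function name, parameters, and defaults here are illustrative assumptions.

```python
import numpy as np

def dpsgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
               noise_multiplier=1.1, rng=None):
    """One DP-SGD update (illustrative sketch, not the repo's code):
    clip each example's gradient to L2 norm <= clip_norm, average,
    then add Gaussian noise scaled by noise_multiplier * clip_norm."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose norm exceeds the clipping bound.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    avg = np.mean(clipped, axis=0)
    # Noise standard deviation follows the Gaussian mechanism,
    # divided by the batch size because we average the clipped gradients.
    sigma = noise_multiplier * clip_norm / len(per_example_grads)
    noise = rng.normal(0.0, sigma, size=avg.shape)
    return params - lr * (avg + noise)
```

With `noise_multiplier=0` the step reduces to plain SGD on clipped gradients, which makes the clipping behaviour easy to verify in isolation.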
Related projects
Alternatives and complementary repositories for dpsgd-optimizer
- An implementation of Deep Learning with Differential Privacy ☆23 · Updated last year
- Concentrated Differentially Private Gradient Descent with Adaptive per-iteration Privacy Budget ☆47 · Updated 6 years ago
- Code to accompany the paper "Deep Learning with Gaussian Differential Privacy" ☆47 · Updated 3 years ago
- Implementation of "Shuffled Model of Differential Privacy in Federated Learning," AISTATS 2021. ☆17 · Updated 2 years ago
- Code to accompany the paper "Deep Learning with Gaussian Differential Privacy" ☆31 · Updated 3 years ago
- This repo implements several algorithms for learning with differential privacy. ☆102 · Updated last year
- Code for the NDSS 2021 paper "Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses Against Federated Learning" ☆135 · Updated 2 years ago
- Differential-privacy-based federated learning framework using various neural networks and SVM in PyTorch. ☆43 · Updated last year
- Implementation of calibration bounds for differential privacy in the shuffle model ☆23 · Updated 4 years ago
- Code for "Data Poisoning Attacks Against Federated Learning Systems" ☆169 · Updated 3 years ago
- Differentially Private Federated Learning on Heterogeneous Data ☆59 · Updated 2 years ago
- Local Differential Privacy for Federated Learning ☆16 · Updated 2 years ago
- ☆32 · Updated 2 years ago
- Curated notebooks on how to train neural networks using differential privacy and federated learning. ☆66 · Updated 3 years ago
- Heterogeneous Gaussian Mechanism: Preserving Differential Privacy in Deep Learning with Provable Robustness (IJCAI'19). ☆13 · Updated 3 years ago
- This project evaluates the privacy leakage of differentially private machine learning models. ☆129 · Updated last year
- Official code for the KDD22 paper "FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clien… ☆74 · Updated last year
- [CCS 2021] "DataLens: Scalable Privacy Preserving Training via Gradient Compression and Aggregation" by Boxin Wang*, Fan Wu*, Yunhui Long… ☆37 · Updated 2 years ago
- Code for the CCS'22 paper "Federated Boosted Decision Trees with Differential Privacy" ☆43 · Updated last year
- CRFL: Certifiably Robust Federated Learning against Backdoor Attacks (ICML 2021) ☆71 · Updated 3 years ago
- Differential-privacy-based federated learning framework using various neural networks and SVM in PyTorch. ☆30 · Updated 3 years ago
- Official implementation of "Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective" ☆55 · Updated last year
- Simple differential privacy in PyTorch ☆48 · Updated 4 years ago
- Learning from History for Byzantine Robustness ☆21 · Updated 3 years ago
- A sybil-resilient distributed learning protocol. ☆94 · Updated last year
- Federated Learning and Membership Inference Attacks experiments on CIFAR10 ☆19 · Updated 4 years ago
- Membership inference, attribute inference, and model inversion attacks implemented in PyTorch. ☆56 · Updated last month
- ☆13 · Updated last year
- Code for "Improved Deep Leakage from Gradients" (iDLG). ☆144 · Updated 3 years ago
- Official implementation of "FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective"… ☆37 · Updated 3 years ago