thecml / dpsgd-optimizer
Amortized version of the differentially private SGD (DP-SGD) algorithm published in "Deep Learning with Differential Privacy" by Abadi et al. Enforces privacy by clipping per-example gradients and sanitising them with Gaussian noise during training.
☆41Updated 10 months ago
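The description above summarises the core DP-SGD step: clip each per-example gradient, then sanitise the aggregate with Gaussian noise before the parameter update. The snippet below is a minimal, framework-free sketch of that step for illustration only; the function and parameter names (`dp_sgd_step`, `clip_norm`, `noise_multiplier`) are placeholders and are not taken from this repository's API.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm, noise_multiplier, lr, rng=None):
    """Illustrative DP-SGD update (hypothetical helper, not the repo's API):
    clip each per-example gradient to L2 norm `clip_norm`, sum the clipped
    gradients, add Gaussian noise with std noise_multiplier * clip_norm,
    average, and take a plain SGD step."""
    rng = rng or np.random.default_rng()
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]              # per-example L2 clipping
    batch_size = len(per_example_grads)
    noisy_grad = (np.sum(clipped, axis=0)
                  + rng.normal(0.0, noise_multiplier * clip_norm, size=params.shape)
                  ) / batch_size                        # sanitise with Gaussian noise
    return params - lr * noisy_grad                     # SGD step on the noisy gradient

# Toy usage: three "per-example" gradients for a 2-parameter model.
params = np.zeros(2)
grads = [np.array([3.0, 4.0]), np.array([0.1, -0.2]), np.array([1.0, 1.0])]
params = dp_sgd_step(params, grads, clip_norm=1.0, noise_multiplier=1.1, lr=0.05)
```

As in Abadi et al., the noise standard deviation is the noise multiplier times the clipping norm, applied to the summed clipped gradients before averaging over the batch.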
Alternatives and similar repositories for dpsgd-optimizer:
Users interested in dpsgd-optimizer are comparing it to the repositories listed below
- Concentrated Differentially Private Gradient Descent with Adaptive per-iteration Privacy Budget☆49Updated 6 years ago
- An implementation of Deep Learning with Differential Privacy☆24Updated last year
- Differentially Private Federated Learning on Heterogeneous Data☆60Updated 2 years ago
- Curated notebooks on how to train neural networks using differential privacy and federated learning.☆66Updated 4 years ago
- Implementation of "Shuffled Model of Differential Privacy in Federated Learning," AISTATS 2021.☆17Updated 2 years ago
- Code for NDSS 2021 Paper "Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses Against Federated Learning"☆140Updated 2 years ago
- Local Differential Privacy for Federated Learning☆16Updated 2 years ago
- Code to accompany the paper "Deep Learning with Gaussian Differential Privacy"☆49Updated 3 years ago
- Differentially private federated learning framework with various neural networks and SVM, implemented in PyTorch.☆45Updated 2 years ago
- Code to accompany the paper "Deep Learning with Gaussian Differential Privacy"☆31Updated 3 years ago
- ☆33Updated 2 years ago
- Differentially private federated learning framework with various neural networks and SVM, implemented in PyTorch.☆30Updated 4 years ago
- Implementation of calibration bounds for differential privacy in the shuffle model☆23Updated 4 years ago
- The official code of the KDD22 paper "FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clien…☆74Updated last year
- Code for the CCS'22 paper "Federated Boosted Decision Trees with Differential Privacy"☆44Updated last year
- This repo implements several algorithms for learning with differential privacy.☆104Updated 2 years ago
- CRFL: Certifiably Robust Federated Learning against Backdoor Attacks (ICML 2021)☆71Updated 3 years ago
- Code for the paper "Bayesian Differential Privacy for Machine Learning"☆22Updated 4 years ago
- A sybil-resilient distributed learning protocol.☆100Updated last year
- Heterogeneous Gaussian Mechanism: Preserving Differential Privacy in Deep Learning with Provable Robustness (IJCAI'19).☆13Updated 3 years ago
- This is a simple backdoor model for federated learning. We use MNIST as the original data set for the data attack and we use CIFAR-10 data set…☆14Updated 4 years ago
- Official implementation of "Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective"☆55Updated last year
- Official implementation of "FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective"…☆39Updated 3 years ago
- PyTorch implementation of Security-Preserving Federated Learning via Byzantine-Sensitive Triplet Distance☆33Updated 4 months ago
- Analytic calibration for differential privacy with Gaussian perturbations☆46Updated 6 years ago
- Code for "Analyzing Federated Learning through an Adversarial Lens" https://arxiv.org/abs/1811.12470☆147Updated 2 years ago
- Robust aggregation for federated learning with the RFA algorithm.☆47Updated 2 years ago
- Code for Data Poisoning Attacks Against Federated Learning Systems☆180Updated 3 years ago
- This is the code for our paper "Robust Federated Learning with Attack-Adaptive Aggregation", accepted by FTL-IJCAI'21.☆44Updated last year
- Implementing the algorithm from our paper: "A Reputation Mechanism Is All You Need: Collaborative Fairness and Adversarial Robustness in …☆34Updated 8 months ago