thecml / dpsgd-optimizer
An amortized version of the differentially private SGD (DP-SGD) algorithm from "Deep Learning with Differential Privacy" by Abadi et al. Enforces privacy by clipping per-example gradients and sanitising them with Gaussian noise during training.
☆40 · Updated last year
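The sanitisation the description refers to, clip each per-example gradient to a fixed L2 norm, then add Gaussian noise before averaging, can be sketched as follows. This is a minimal NumPy illustration of the DP-SGD step from Abadi et al., not the repository's actual API; the function name and parameters are illustrative.

```python
import numpy as np

def dpsgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One DP-SGD gradient sanitisation step (illustrative sketch).

    per_example_grads: array of shape (batch, dim), one gradient per example.
    Each gradient is clipped to L2 norm <= clip_norm, then Gaussian noise with
    std = noise_multiplier * clip_norm is added to the sum before averaging.
    """
    rng = rng or np.random.default_rng(0)
    # Per-example L2 norms; scale factor min(1, C / ||g||) clips without rescaling small grads.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale
    # Noise calibrated to the clipping norm (the L2 sensitivity of the sum).
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=clipped.shape[1])
    return (clipped.sum(axis=0) + noise) / len(per_example_grads)
```

With `noise_multiplier=0` this reduces to averaging the clipped gradients; the privacy accounting that chooses `noise_multiplier` for a target (ε, δ), e.g. the moments accountant from the paper, is a separate component not shown here.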
Alternatives and similar repositories for dpsgd-optimizer
Users that are interested in dpsgd-optimizer are comparing it to the libraries listed below
- Code for NDSS 2021 Paper "Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses Against Federated Learning" ☆147 · Updated 3 years ago
- Code for Data Poisoning Attacks Against Federated Learning Systems ☆197 · Updated 4 years ago
- Implementation of dp-based federated learning framework using PyTorch ☆304 · Updated 2 years ago
- A sybil-resilient distributed learning protocol. ☆105 · Updated last year
- The official code of the KDD 2022 paper "FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients" ☆84 · Updated 2 years ago
- Code to accompany the paper "Deep Learning with Gaussian Differential Privacy" ☆49 · Updated 4 years ago
- This repo implements several algorithms for learning with differential privacy. ☆109 · Updated 2 years ago
- ⚔️ Blades: A Unified Benchmark Suite for Attacks and Defenses in Federated Learning ☆143 · Updated 6 months ago
- Implementation of calibration bounds for differential privacy in the shuffle model ☆22 · Updated 4 years ago
- This repository contains the official implementation for the manuscript "Make Landscape Flatter in Differentially Private Federated Learning" ☆51 · Updated 2 years ago
- Differentially Private Federated Learning on Heterogeneous Data ☆66 · Updated 3 years ago
- Curated notebooks on how to train neural networks using differential privacy and federated learning. ☆68 · Updated 4 years ago
- Membership Inference, Attribute Inference and Model Inversion attacks implemented using PyTorch. ☆63 · Updated 10 months ago
- DBA: Distributed Backdoor Attacks against Federated Learning (ICLR 2020) ☆196 · Updated 4 years ago
- Official implementation of "FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective" ☆42 · Updated 3 years ago
- Code for "Analyzing Federated Learning through an Adversarial Lens" https://arxiv.org/abs/1811.12470 ☆151 · Updated 2 years ago
- Concentrated Differentially Private Gradient Descent with Adaptive per-iteration Privacy Budget ☆49 · Updated 7 years ago
- ☆35 · Updated 2 years ago
- Official implementation of "Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective" ☆57 · Updated 2 years ago
- An implementation of Deep Learning with Differential Privacy ☆25 · Updated 2 years ago
- ☆37 · Updated 3 years ago
- PyTorch implementation of Security-Preserving Federated Learning via Byzantine-Sensitive Triplet Distance ☆34 · Updated 10 months ago
- Ditto: Fair and Robust Federated Learning Through Personalization (ICML '21) ☆145 · Updated 3 years ago
- Code for the CCS '22 paper "Federated Boosted Decision Trees with Differential Privacy" ☆46 · Updated last year
- ☆174 · Updated 10 months ago
- Code to accompany the paper "Deep Learning with Gaussian Differential Privacy" ☆33 · Updated 4 years ago
- ☆42 · Updated 2 years ago
- The code for "Improved Deep Leakage from Gradients" (iDLG). ☆153 · Updated 4 years ago
- Adversarial attacks and defenses against federated learning. ☆19 · Updated 2 years ago
- Privacy Preserving Vertical Federated Learning ☆218 · Updated 2 years ago