loretanr / dp-gbdt
GBDT learning + differential privacy. A standalone C++ implementation of "DPBoost" (Li et al.); hardened and SGX versions of the code are also available.
☆8 · Updated 2 years ago
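The core DPBoost-style idea is to bound each training example's influence (e.g., by clipping gradients) and release each leaf value with Laplace noise calibrated to that bound; the paper also randomizes split selection. Below is a minimal C++ sketch of just the noisy-leaf step, not code from this repository: the names (`dp_leaf_value`, `clip_bound`, `epsilon_leaf`) and the exact sensitivity bound are illustrative assumptions.

```cpp
#include <algorithm>
#include <cmath>
#include <iostream>
#include <random>
#include <vector>

// Laplace(0, scale) sample via the inverse-CDF transform of a Uniform(-0.5, 0.5) draw.
double laplace_noise(double scale, std::mt19937 &rng) {
    std::uniform_real_distribution<double> unif(-0.5, 0.5);
    const double u = unif(rng);
    return -scale * std::copysign(1.0, u) * std::log(1.0 - 2.0 * std::abs(u));
}

// Differentially private leaf value for squared-error GBDT (illustrative only):
// clip each gradient to [-clip_bound, clip_bound], compute the usual leaf value,
// and add Laplace noise scaled to the sensitivity of that clipped average.
// The "+ 1.0" in the denominator plays the role of the L2 regularization term.
double dp_leaf_value(const std::vector<double> &gradients,
                     double clip_bound,   // gradient clipping bound (assumed hyper-parameter)
                     double epsilon_leaf, // privacy budget assigned to this leaf
                     std::mt19937 &rng) {
    double clipped_sum = 0.0;
    for (double g : gradients)
        clipped_sum += std::clamp(g, -clip_bound, clip_bound);

    const double denom = static_cast<double>(gradients.size()) + 1.0;
    const double leaf = -clipped_sum / denom;

    // Replacing one example changes the clipped sum by at most 2 * clip_bound,
    // so the leaf value moves by at most 2 * clip_bound / denom.
    const double sensitivity = 2.0 * clip_bound / denom;
    return leaf + laplace_noise(sensitivity / epsilon_leaf, rng);
}

int main() {
    std::mt19937 rng(42);
    const std::vector<double> gradients = {0.7, -1.3, 0.2, 2.5, -0.4};
    std::cout << dp_leaf_value(gradients, /*clip_bound=*/1.0, /*epsilon_leaf=*/0.1, rng) << "\n";
}
```

In a full trainer, the per-tree budget would additionally be split across leaves (and across split selection), so `epsilon_leaf` is only one slice of the overall privacy budget.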
Related projects
Alternatives and complementary repositories for dp-gbdt
- Implementation of calibration bounds for differential privacy in the shuffle model ☆23 · Updated 4 years ago
- Concentrated Differentially Private Gradient Descent with Adaptive per-iteration Privacy Budget ☆47 · Updated 6 years ago
- An implementation of Secure Aggregation algorithm based on "Practical Secure Aggregation for Privacy-Preserving Machine Learning (Bonawit… ☆81 · Updated 5 years ago
- Code to accompany the paper "Deep Learning with Gaussian Differential Privacy" ☆47 · Updated 3 years ago
- A simple Python implementation of a secure aggregation protocol for federated learning. ☆34 · Updated last year
- Code for the CCS'22 paper "Federated Boosted Decision Trees with Differential Privacy" ☆43 · Updated last year
- Amortized version of the differentially private SGD algorithm published in "Deep Learning with Differential Privacy" by Abadi et al. Enfo… ☆41 · Updated 7 months ago
- This project's goal is to evaluate the privacy leakage of differentially private machine learning models. ☆129 · Updated last year
- Analytic calibration for differential privacy with Gaussian perturbations ☆44 · Updated 6 years ago
- Code for Data Poisoning Attacks Against Federated Learning Systems ☆169 · Updated 3 years ago
- ☆32 · Updated 2 years ago
- IEEE TIFS'20: VeriFL: Communication-Efficient and Fast Verifiable Aggregation for Federated Learning ☆22 · Updated 2 years ago
- Privacy-Preserving Gradient Boosting Decision Trees (AAAI 2020) ☆24 · Updated last year
- An implementation of Deep Learning with Differential Privacy ☆23 · Updated last year
- ☆27 · Updated last year
- Code for Exploiting Unintended Feature Leakage in Collaborative Learning (in Oakland 2019) ☆53 · Updated 5 years ago
- Heterogeneous Gaussian Mechanism: Preserving Differential Privacy in Deep Learning with Provable Robustness (IJCAI'19). ☆13 · Updated 3 years ago
- An implementation of the paper "A Hybrid Approach to Privacy Preserving Federated Learning" (https://arxiv.org/pdf/1812.03224.pdf) ☆21 · Updated 4 years ago
- personal implementation of secure aggregation protocol ☆42 · Updated 9 months ago
- A secure aggregation system for private federated learning ☆36 · Updated 6 months ago
- Code for NDSS 2021 Paper "Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses Against Federated Learning" ☆135 · Updated 2 years ago
- Differential Privacy Preservation in Deep Learning under Model Attacks ☆132 · Updated 3 years ago
- A sybil-resilient distributed learning protocol. ☆94 · Updated last year
- An implementation of "Data Synthesis via Differentially Private Markov Random Fields" ☆11 · Updated 7 months ago
- Code for Membership Inference Attack against Machine Learning Models (in Oakland 2017) ☆187 · Updated 7 years ago
- Secure Aggregation for FL ☆34 · Updated 11 months ago
- SAFEFL: MPC-friendly Framework for Private and Robust Federated Learning ☆29 · Updated last year
- A library for running membership inference attacks against ML models ☆139 · Updated last year
- Implementation of "Shuffled Model of Differential Privacy in Federated Learning" (AISTATS 2021). ☆17 · Updated 2 years ago
- DBA: Distributed Backdoor Attacks against Federated Learning (ICLR 2020) ☆176 · Updated 3 years ago