IBM / discrete-gaussian-differential-privacy
Code for the Canonne-Kamath-Steinke paper "The Discrete Gaussian for Differential Privacy", https://arxiv.org/abs/2004.00010
☆62 · Updated 4 years ago
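The paper samples the discrete Gaussian exactly by rejection from a discrete Laplace proposal. Below is a minimal floating-point sketch of that idea; the function name and the numpy-based arithmetic are illustrative assumptions, not the repository's API (the reference code works with exact arithmetic rather than floats).

```python
# Minimal sketch: sample a discrete Gaussian (Pr[Y=y] proportional to exp(-y^2 / (2 sigma^2)))
# by rejection from a discrete Laplace proposal, in the spirit of Canonne-Kamath-Steinke (2020).
# Floating-point simplification for illustration only.
import numpy as np

def sample_discrete_gaussian(sigma: float, rng=None) -> int:
    rng = rng or np.random.default_rng()
    t = int(np.floor(sigma)) + 1            # scale of the discrete Laplace proposal
    p = 1.0 - np.exp(-1.0 / t)
    while True:
        # The difference of two i.i.d. geometrics is a two-sided geometric (discrete Laplace).
        y = (rng.geometric(p) - 1) - (rng.geometric(p) - 1)
        # Accept with probability proportional to target/proposal, normalized to at most 1.
        accept_prob = np.exp(-((abs(y) - sigma**2 / t) ** 2) / (2 * sigma**2))
        if rng.uniform() < accept_prob:
            return int(y)

# Example: draw 1000 samples with sigma = 5
samples = [sample_discrete_gaussian(5.0) for _ in range(1000)]
```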
Alternatives and similar repositories for discrete-gaussian-differential-privacy:
Users interested in discrete-gaussian-differential-privacy are comparing it to the libraries listed below.
- Analytic calibration for differential privacy with Gaussian perturbations ☆47 · Updated 6 years ago
- This project's goal is to evaluate the privacy leakage of differentially private machine learning models. ☆131 · Updated 2 years ago
- Code for computing tight guarantees for differential privacy ☆23 · Updated 2 years ago
- A fast algorithm to optimally compose privacy guarantees of differentially private (DP) mechanisms to arbitrary accuracy. ☆73 · Updated last year
- autodp: A flexible and easy-to-use package for differential privacy ☆275 · Updated last year
- ☆80 · Updated 2 years ago
- This repository contains the code for the first large-scale investigation of differentially private convex optimization algorithms. ☆64 · Updated 6 years ago
- ☆43 · Updated 3 years ago
- This work combines differential privacy with multi-party computation protocols to achieve distributed machine learning. ☆26 · Updated 4 years ago
- Statistical Counterexample Detector for Differential Privacy ☆28 · Updated last year
- Code for Auditing DPSGD ☆37 · Updated 3 years ago
- GitHub Pages backend for https://differentialprivacy.org ☆26 · Updated this week
- Multiple Frequency Estimation Under Local Differential Privacy in Python ☆47 · Updated last year
- Implementation of calibration bounds for differential privacy in the shuffle model ☆22 · Updated 4 years ago
- ☆32 · Updated last year
- Hadamard Response: Communication-efficient, sample-optimal, linear-time locally private learning of distributions ☆14 · Updated 4 years ago
- Sample LDP implementation in Python ☆125 · Updated last year
- Code to accompany the paper "Deep Learning with Gaussian Differential Privacy" ☆49 · Updated 3 years ago
- ☆38 · Updated 2 years ago
- Differential Privacy Preservation in Deep Learning under Model Attacks ☆134 · Updated 4 years ago
- Heterogeneous Gaussian Mechanism: Preserving Differential Privacy in Deep Learning with Provable Robustness (IJCAI'19) ☆13 · Updated 4 years ago
- Research and experimental code related to Opacus, an open-source library for training PyTorch models with Differential Privacy ☆17 · Updated 6 months ago
- Differentially private machine learning (a minimal gradient-perturbation sketch follows this list) ☆191 · Updated 3 years ago
- An implementation of the tools described in the paper "Graphical-model based estimation and inference for differential privacy" ☆102 · Updated this week
- Code to reproduce experiments in "Antipodes of Label Differential Privacy: PATE and ALIBI" ☆31 · Updated 3 years ago
- Code for NIPS 2017 paper ☆50 · Updated 4 years ago
- Python package for simple implementations of state-of-the-art LDP frequency estimation algorithms; contains code for our VLDB 2021 paper (a minimal randomized-response sketch follows this list). ☆74 · Updated last year
- A library for running membership inference attacks against ML models ☆144 · Updated 2 years ago
- Code for "Exploiting Unintended Feature Leakage in Collaborative Learning" (Oakland 2019) ☆53 · Updated 5 years ago
- Concentrated Differentially Private Gradient Descent with Adaptive per-iteration Privacy Budget ☆49 · Updated 7 years ago
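Several of the local differential privacy entries above (frequency estimation, the sample LDP implementation, the VLDB 2021 package) build on local randomizers such as generalized randomized response (GRR). The sketch below shows GRR perturbation and the standard unbiased frequency estimator; it is a minimal illustration under these assumptions, not the API of any package listed here.

```python
# Minimal generalized randomized response (GRR) sketch for LDP frequency estimation.
import numpy as np

def grr_perturb(value: int, k: int, eps: float, rng) -> int:
    """Report the true value with probability e^eps/(e^eps + k - 1), else a uniform other value."""
    p = np.exp(eps) / (np.exp(eps) + k - 1)
    if rng.uniform() < p:
        return value
    other = int(rng.integers(k - 1))      # uniform over the k-1 values other than `value`
    return other if other < value else other + 1

def grr_estimate(reports: np.ndarray, k: int, eps: float) -> np.ndarray:
    """Unbiased frequency estimates recovered from the perturbed reports."""
    n = len(reports)
    p = np.exp(eps) / (np.exp(eps) + k - 1)
    q = 1.0 / (np.exp(eps) + k - 1)
    counts = np.bincount(reports, minlength=k)
    return (counts / n - q) / (p - q)

# Example: 100,000 users, domain of size 10, eps = 1
rng = np.random.default_rng(0)
true_values = rng.integers(10, size=100_000)
reports = np.array([grr_perturb(v, 10, 1.0, rng) for v in true_values])
print(grr_estimate(reports, 10, 1.0))     # roughly 0.1 per value
```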
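Several of the training-oriented entries (auditing DPSGD, Gaussian differential privacy for deep learning, differentially private gradient descent) share the same core step: clip each example's gradient and add Gaussian noise calibrated to the clipping bound. A minimal numpy sketch of that step for a plain linear model is below; the loss, parameter names, and noise scale are placeholder assumptions and are not tied to any repository above.

```python
# Minimal sketch of one DP-SGD-style step: per-example gradient clipping plus Gaussian noise.
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip=1.0, sigma=1.0, rng=None):
    """One noisy gradient step on squared loss 0.5*(x.w - y)^2 for a linear model."""
    rng = rng or np.random.default_rng()
    residuals = X @ w - y                                   # shape (n,)
    grads = residuals[:, None] * X                          # per-example gradients, shape (n, d)
    # Clip each example's gradient to l2 norm at most `clip`.
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
    # Sum the clipped gradients, add Gaussian noise with scale sigma*clip, then average.
    noisy_sum = grads.sum(axis=0) + rng.normal(0.0, sigma * clip, size=w.shape)
    return w - lr * noisy_sum / len(X)

# Example usage on synthetic data
rng = np.random.default_rng(0)
X, y = rng.normal(size=(1000, 5)), rng.normal(size=1000)
w = np.zeros(5)
for _ in range(50):
    w = dp_sgd_step(w, X, y, rng=rng)
```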