eth-sri / dp-sniper
A machine-learning-based tool for discovering differential privacy violations in black-box algorithms.
☆25 · Updated 3 years ago
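The violation-finding idea that tools like dp-sniper automate can be sketched in a few lines: run a black-box mechanism on two adjacent inputs, estimate the probability that a chosen attack accepts each output, and take the log-ratio as an empirical lower bound on ε; a bound above the claimed ε witnesses a violation. The sketch below is illustrative only — dp-sniper learns the attack set with a machine-learned classifier rather than a hand-picked threshold, and all names here (`empirical_epsilon`, `noisy_count`) are hypothetical, not dp-sniper's API.

```python
import math
import random

def empirical_epsilon(mechanism, a, a_prime, attack, n=100_000):
    """Monte Carlo estimate of the privacy loss |ln(P[attack(M(a))] /
    P[attack(M(a'))])| witnessed by a fixed attack set."""
    p = sum(attack(mechanism(a)) for _ in range(n)) / n
    q = sum(attack(mechanism(a_prime)) for _ in range(n)) / n
    if p == 0.0 or q == 0.0:
        return 0.0 if p == q else float("inf")
    return abs(math.log(p / q))

def noisy_count(x, b=0.1):
    # Laplace(b) noise sampled as a difference of two exponentials;
    # b = 0.1 is far too little noise for a sensitivity-1 count that
    # claims eps = 1, so this mechanism violates its stated guarantee
    return x + random.expovariate(1 / b) - random.expovariate(1 / b)

# Simple threshold attack distinguishing adjacent counts 0 and 1
attack = lambda out: out > 0.5
eps_hat = empirical_epsilon(noisy_count, 0, 1, attack)
print(f"empirical eps lower bound: {eps_hat:.2f}")  # well above a claimed eps = 1
```

For Laplace scale 0.1 the true log-ratio at this threshold is ln((1 − ½e⁻⁵)/(½e⁻⁵)) ≈ 5.7, so a mechanism advertised as 1-DP is flagged; the ML component of dp-sniper replaces the hand-written `attack` with a learned classifier that searches for the most distinguishing output region.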
Alternatives and similar repositories for dp-sniper
Users interested in dp-sniper are comparing it to the libraries listed below.
- CaPC is a method that enables collaborating parties to improve their own local heterogeneous machine learning models in a setting where b…☆26 · Updated 3 years ago
- This project's goal is to evaluate the privacy leakage of differentially private machine learning models.☆135 · Updated 2 years ago
- Code for the Canonne-Kamath-Steinke paper https://arxiv.org/abs/2004.00010☆59 · Updated 5 years ago
- ☆66 · Updated 5 years ago
- Analytic calibration for differential privacy with Gaussian perturbations☆48 · Updated 6 years ago
- ☆51 · Updated 4 years ago
- autodp: A flexible and easy-to-use package for differential privacy☆273 · Updated last year
- Statistical Counterexample Detector for Differential Privacy☆28 · Updated last year
- Secure Aggregation for FL☆35 · Updated last year
- ☆80 · Updated 3 years ago
- Byzantine-resilient distributed SGD with TensorFlow.☆40 · Updated 4 years ago
- Privacy-preserving Deep Learning based on homomorphic encryption (HE)☆34 · Updated 3 years ago
- Privacy-preserving Federated Learning with Trusted Execution Environments☆69 · Updated last week
- Code for Auditing DPSGD☆37 · Updated 3 years ago
- Code for computing tight guarantees for differential privacy☆23 · Updated 2 years ago
- A fast algorithm to optimally compose privacy guarantees of differentially private (DP) mechanisms to arbitrary accuracy.☆73 · Updated last year
- GBDT learning + differential privacy. Standalone C++ implementation of "DPBoost" (Li et al.). There are further hardened & SGX versions o…☆8 · Updated 3 years ago
- ☆88 · Updated 5 years ago
- A library for running membership inference attacks against ML models☆149 · Updated 2 years ago
- Sample LDP implementation in Python☆129 · Updated last year
- Differential Privacy Preservation in Deep Learning under Model Attacks☆135 · Updated 4 years ago
- ☆32 · Updated 2 years ago
- Implementation of membership inference and model inversion attacks, extracting training data information from an ML model. Benchmarking …☆103 · Updated 5 years ago
- Code for Exploiting Unintended Feature Leakage in Collaborative Learning (in Oakland 2019)☆53 · Updated 6 years ago
- This repository contains the code for the first large-scale investigation of Differentially Private Convex Optimization algorithms.☆63 · Updated 6 years ago
- Multiple Frequency Estimation Under Local Differential Privacy in Python☆48 · Updated 2 years ago
- Heterogeneous Gaussian Mechanism: Preserving Differential Privacy in Deep Learning with Provable Robustness (IJCAI'19).☆13 · Updated 4 years ago
- Systematic Evaluation of Membership Inference Privacy Risks of Machine Learning Models☆127 · Updated last year
- This project implements privacy-preserving and verifiable convolutional neural network (CNN) testing, enabling a model developer to prove to users that the CNN's performance on non-public data supplied by multiple testers is genuine, while keeping both the model and the data private.☆14 · Updated last year
- A secure aggregation system for private federated learning☆41 · Updated last year