BlindMI (☆22, updated Aug 15, 2022)
Alternatives and similar repositories for BlindMI
Users interested in BlindMI are comparing it to the repositories listed below.
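BlindMI and most of the repositories below implement membership inference attacks, which try to decide whether a given sample was in a model's training set. As background, a minimal confidence-threshold attack (in the style of Yeom et al., not BlindMI's own differential-comparison method) can be sketched as follows; the toy "model" and all names here are illustrative assumptions, not code from any listed repository:

```python
# Sketch of a confidence-threshold membership inference attack.
# The toy model below is a stand-in for an overfit classifier: it is
# more confident on points it has memorized (its training set).
import numpy as np

rng = np.random.default_rng(0)

def predict_confidence(x, train_set):
    # Confidence decays with distance to the nearest memorized point,
    # mimicking an overfit model's high softmax score on members.
    d = np.min(np.linalg.norm(train_set - x, axis=1))
    return np.exp(-d)

train = rng.normal(size=(50, 5))   # members (in the training set)
test = rng.normal(size=(50, 5))    # non-members

conf_members = np.array([predict_confidence(x, train) for x in train])
conf_nonmembers = np.array([predict_confidence(x, train) for x in test])

# Attack rule: predict "member" whenever confidence exceeds a threshold.
threshold = 0.5
tpr = np.mean(conf_members > threshold)     # members correctly flagged
fpr = np.mean(conf_nonmembers > threshold)  # non-members wrongly flagged
attack_accuracy = 0.5 * (tpr + (1 - fpr))   # balanced attack accuracy
print(f"attack accuracy: {attack_accuracy:.2f}")
```

Because the toy model is perfectly confident on its own training points, the attack separates members from non-members well above chance; real attacks such as BlindMI refine this idea without needing a fixed threshold or shadow models.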
- Code for the AAAI 2021 paper "Membership Privacy for Machine Learning Models Through Knowledge Transfer" (☆11, updated Apr 5, 2021)
- Processed datasets that we have used in our research (☆14, updated Apr 30, 2020)
- Systematic Evaluation of Membership Inference Privacy Risks of Machine Learning Models (☆133, updated Apr 9, 2024)
- Official code for the paper "Membership Inference Attacks Against Recommender Systems" (ACM CCS 2021) (☆20, updated Oct 8, 2024)
- Code for the paper "ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models" (☆85, updated Nov 22, 2021)
- Privacy Risks of Securing Machine Learning Models against Adversarial Examples (☆46, updated Nov 25, 2019)
- Membership Inference Attack against Graph Neural Networks (☆12, updated Nov 9, 2022)
- Code for the paper "Quantifying Privacy Leakage in Graph Embedding", published in MobiQuitous 2020 (☆17, updated Nov 11, 2021)
- Privacy Meter: an open-source library to audit data privacy in statistical and machine learning algorithms (☆702, updated Apr 26, 2025)
- Differential Privacy Protection against Membership Inference Attack on Machine Learning for Genomic Data (☆19, updated Aug 4, 2020)
- 🔒 Implementation of Shokri et al. (2016), "Membership Inference Attacks against Machine Learning Models" (☆34, updated Aug 29, 2022)
- Code for Auditing DPSGD (☆37, updated Feb 15, 2022)
- Public implementation of the paper "On the Importance of Difficulty Calibration in Membership Inference Attacks" (☆16, updated Dec 1, 2021)
- Code for the CSF 2018 paper "Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting" (☆39, updated Jan 28, 2019)
- Code for "Neural Network Inversion in Adversarial Setting via Background Knowledge Alignment" (CCS 2019) (☆49, updated Dec 17, 2019)
- Official implementation of "GAN-Leaks: A Taxonomy of Membership Inference Attacks against Generative Models" (CCS 2020) (☆47, updated Apr 22, 2022)
- Python package to create adversarial agents for membership inference attacks against machine learning models (☆46, updated Feb 12, 2019)
- This project's goal is to evaluate the privacy leakage of differentially private machine learning models (☆136, updated Dec 8, 2022)
- Code for ML Doctor (☆92, updated Aug 14, 2024)
- Library for training globally-robust neural networks (☆31, updated Aug 7, 2025)
- Paper code (☆28, updated Oct 5, 2020)
- An awesome list of papers on privacy attacks against machine learning (☆634, updated Mar 18, 2024)
- Training data extraction on GPT-2 (☆197, updated Feb 4, 2023)
- Code for "Machine Learning Models that Remember Too Much" (CCS 2017) (☆31, updated Oct 15, 2017)
- Adversarial attack on a CNN trained on the MNIST dataset using Targeted I-FGSM and Targeted MI-FGSM (☆11, updated Feb 17, 2018)
- Implementation of the model inversion attack introduced in "Model Inversion Attacks that Exploit Confidence Information and Basic Counte…" (☆84, updated Feb 26, 2023)
- Official implementation of Spectro-Riemannian Graph Neural Networks (ICLR 2025) (☆17, updated May 30, 2025)