☆23 · Aug 15, 2022 · Updated 3 years ago
Alternatives and similar repositories for BlindMI
Users interested in BlindMI are comparing it to the libraries listed below.
- ☆46 · Nov 10, 2019 · Updated 6 years ago
- Code for the AAAI 2021 paper "Membership Privacy for Machine Learning Models Through Knowledge Transfer" · ☆11 · Apr 5, 2021 · Updated 5 years ago
- A repo to download and preprocess the Purchase100 dataset extracted from Kaggle's Acquire Valued Shoppers Challenge · ☆12 · Jun 21, 2021 · Updated 4 years ago
- Code for the paper "Label-Only Membership Inference Attacks" · ☆67 · Sep 11, 2021 · Updated 4 years ago
- Systematic evaluation of membership inference privacy risks of machine learning models · ☆133 · Apr 9, 2024 · Updated 2 years ago
- ☆19 · Mar 6, 2023 · Updated 3 years ago
- ☆20 · Oct 28, 2025 · Updated 6 months ago
- ☆13 · Apr 12, 2022 · Updated 4 years ago
- Official code for the paper "Membership Inference Attacks Against Recommender Systems" (ACM CCS 2021) · ☆21 · Oct 8, 2024 · Updated last year
- Code for the paper "ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models" · ☆84 · Nov 22, 2021 · Updated 4 years ago
- PyTorch implementation of Security-Preserving Federated Learning via Byzantine-Sensitive Triplet Distance · ☆34 · Oct 11, 2024 · Updated last year
- Code for "Neural Network Inversion in Adversarial Setting via Background Knowledge Alignment" (CCS 2019) · ☆49 · Dec 17, 2019 · Updated 6 years ago
- Membership inference attack against graph neural networks · ☆12 · Nov 9, 2022 · Updated 3 years ago
- ☆32 · Sep 2, 2024 · Updated last year
- Differential privacy protection against membership inference attacks on machine learning for genomic data · ☆19 · Aug 4, 2020 · Updated 5 years ago
- Privacy Meter: an open-source library to audit data privacy in statistical and machine learning algorithms · ☆711 · Apr 26, 2025 · Updated last year
- ☆20 · Feb 22, 2023 · Updated 3 years ago
- Public implementation of the paper "On the Importance of Difficulty Calibration in Membership Inference Attacks" · ☆16 · Dec 1, 2021 · Updated 4 years ago
- ☆22 · Sep 17, 2024 · Updated last year
- Code for CISS, from the ICML 2022 paper "Certified Robustness Against Natural Language Attacks by Causal Intervention" · ☆11 · Dec 6, 2022 · Updated 3 years ago
- Code for ML Doctor · ☆91 · Aug 14, 2024 · Updated last year
- ☆45 · Aug 4, 2023 · Updated 2 years ago
- Evaluates the privacy leakage of differentially private machine learning models · ☆136 · Dec 8, 2022 · Updated 3 years ago
- Code for the CSF 2018 paper "Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting" · ☆37 · Jan 28, 2019 · Updated 7 years ago
- ☆25 · Jan 20, 2019 · Updated 7 years ago
- Official implementation of "GAN-Leaks: A Taxonomy of Membership Inference Attacks against Generative Models" (CCS 2020) · ☆46 · Apr 22, 2022 · Updated 4 years ago
- [ICCV 2023] Source code for the paper "Rickrolling the Artist: Injecting Invisible Backdoors into Text-Guided Image Generation Models" · ☆65 · Nov 20, 2023 · Updated 2 years ago
- Code for auditing DP-SGD · ☆39 · Feb 15, 2022 · Updated 4 years ago
- Paper code · ☆28 · Oct 5, 2020 · Updated 5 years ago
- Adversarial attack on a CNN trained on MNIST using targeted I-FGSM and targeted MI-FGSM · ☆11 · Feb 17, 2018 · Updated 8 years ago
- ☆21 · Sep 21, 2021 · Updated 4 years ago
- Implementation of the model inversion attack introduced in "Model Inversion Attacks that Exploit Confidence Information and Basic Counte…" · ☆84 · Feb 26, 2023 · Updated 3 years ago
- Code for the paper "Quantifying Privacy Leakage in Graph Embedding" (MobiQuitous 2020) · ☆18 · Nov 11, 2021 · Updated 4 years ago
- Transformer-based trigram blocking implementation in TensorFlow · ☆11 · Feb 26, 2020 · Updated 6 years ago
- An awesome list of papers on privacy attacks against machine learning · ☆634 · Mar 18, 2024 · Updated 2 years ago
- Code for "Machine Learning Models that Remember Too Much" (CCS 2017) · ☆31 · Oct 15, 2017 · Updated 8 years ago
- ☆10 · Jan 13, 2026 · Updated 3 months ago
- Code for the paper "Overconfidence is a Dangerous Thing: Mitigating Membership Inference Attacks by Enforcing Less Confident Prediction" … · ☆13 · Sep 6, 2023 · Updated 2 years ago
- Code for "Certified Robustness to Text Adversarial Attacks by Randomized [MASK]" · ☆17 · Oct 8, 2024 · Updated last year