Membership Inference Attacks and Defenses in Neural Network Pruning
☆28 · Updated Jul 12, 2022
Alternatives and similar repositories for mia_prune
Users interested in mia_prune are comparing it to the repositories listed below.
- [USENIX Security 2022] Mitigating Membership Inference Attacks by Self-Distillation Through a Novel Ensemble Architecture · ☆16 · Updated Aug 29, 2022
- ☆32 · Updated Sep 2, 2024
- ☆25 · Updated Nov 14, 2022
- ☆22 · Updated Sep 17, 2024
- ☆15 · Updated Apr 4, 2024
- ☆33 · Updated Nov 27, 2023
- Official implementation of "RelaxLoss: Defending Membership Inference Attacks without Losing Utility" (ICLR 2022) · ☆48 · Updated Aug 18, 2022
- [ICML 2023] Are Diffusion Models Vulnerable to Membership Inference Attacks? · ☆43 · Updated Sep 4, 2024
- 🔒 Implementation of Shokri et al. (2016), "Membership Inference Attacks against Machine Learning Models" · ☆34 · Updated Aug 29, 2022
- ☆13 · Updated Apr 12, 2022
- Official implementation of "GAN-Leaks: A Taxonomy of Membership Inference Attacks against Generative Models" (CCS 2020) · ☆46 · Updated Apr 22, 2022
- ☆372 · Updated this week
- Public implementation of the paper "On the Importance of Difficulty Calibration in Membership Inference Attacks" · ☆16 · Updated Dec 1, 2021
- ☆10 · Updated Mar 20, 2023
- Code for the paper "Overconfidence is a Dangerous Thing: Mitigating Membership Inference Attacks by Enforcing Less Confident Prediction" … · ☆12 · Updated Sep 6, 2023
- Systematic Evaluation of Membership Inference Privacy Risks of Machine Learning Models · ☆132 · Updated Apr 9, 2024
- Implementation of the paper "Membership Inference Attacks Against Machine Learning Models", Shokri et al. · ☆59 · Updated May 12, 2019
- Data Valuation without Training of a Model, submitted to ICLR'23 · ☆22 · Updated Dec 30, 2022
- Shadow Attack, LiRA, Quantile Regression, and RMIA implementations in PyTorch (online version) · ☆14 · Updated Nov 8, 2024
- ☆13 · Updated Sep 26, 2024
- ☆13 · Updated Jun 17, 2024
- Code for the AAAI 2021 paper "Membership Privacy for Machine Learning Models Through Knowledge Transfer" · ☆11 · Updated Apr 5, 2021
- Knowledge distillation (KD) from a decision-based black-box (DB3) teacher without training data · ☆22 · Updated May 3, 2022
- Privacy Risks of Securing Machine Learning Models against Adversarial Examples · ☆46 · Updated Nov 25, 2019
- ☆20 · Updated Feb 22, 2023
- ☆17 · Updated Oct 11, 2021
- Code for the paper "Membership Inference Attacks Against Vision-Language Models" · ☆28 · Updated Jan 25, 2025
- Privacy Meter: an open-source library to audit data privacy in statistical and machine learning algorithms · ☆706 · Updated Apr 26, 2025
- Data-free knowledge distillation using Gaussian noise (NeurIPS paper) · ☆15 · Updated Mar 24, 2023
- ☆23 · Updated Dec 22, 2024
- Official repository for Practical Membership Inference Attacks Against Large-Scale Multi-Modal Models: A Pilot Study (ICCV2023… · ☆24 · Updated Sep 29, 2023
- Likelihood Ratio Attack (LiRA) in PyTorch · ☆17 · Updated Mar 3, 2025
- Code for a Multi-Hinge Loss with K+1 Conditional GANs; see https://github.com/ilyakava/gan for results on ImageNet 128 · ☆23 · Updated Jan 10, 2021
- GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation (USENIX Security '23) · ☆49 · Updated Jul 3, 2023
- Code for ML Doctor · ☆91 · Updated Aug 14, 2024
- Membership inference, attribute inference, and model inversion attacks implemented in PyTorch · ☆66 · Updated Oct 4, 2024
- [USENIX Security 2025] SOFT: Selective Data Obfuscation for Protecting LLM Fine-tuning against Membership Inference Attacks · ☆20 · Updated Sep 18, 2025
- Implementation of membership inference and model inversion attacks, extracting training-data information from an ML model. Benchmarking … · ☆102 · Updated Nov 2, 2019
- Membership Inference Competition · ☆32 · Updated Jun 12, 2023
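Most of the repositories above implement variants of one core idea: a membership inference attack exploits the gap between a model's behavior on its training members and on unseen non-members. A minimal sketch of the simplest such attack, a loss-threshold attack, is below; the function name, loss values, and threshold are hypothetical illustrations, not taken from any listed repository.

```python
def loss_threshold_mia(losses, threshold):
    """Guess membership from per-example loss: models tend to fit their
    training data more tightly, so predict 'member' (1) when the loss
    falls below the threshold and 'non-member' (0) otherwise."""
    return [1 if loss < threshold else 0 for loss in losses]

# Hypothetical per-example losses from some target model:
# members typically sit near zero, non-members higher.
member_losses = [0.05, 0.10, 0.20, 0.08]
nonmember_losses = [0.90, 1.50, 0.70, 1.10]

losses = member_losses + nonmember_losses
labels = [1] * len(member_losses) + [0] * len(nonmember_losses)

preds = loss_threshold_mia(losses, threshold=0.5)
accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
print(accuracy)  # 1.0 on this cleanly separated toy data
```

Real attacks in the repositories above are stronger: shadow-model attacks (Shokri et al.) train an auxiliary classifier on the target's confidence vectors, and LiRA calibrates a per-example likelihood ratio using many shadow models rather than a single global threshold.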