Membership Inference Attacks and Defenses in Neural Network Pruning
☆28 · Jul 12, 2022 · Updated 3 years ago
Alternatives and similar repositories for mia_prune
Users interested in mia_prune are comparing it to the libraries listed below.
- [USENIX Security 2022] Mitigating Membership Inference Attacks by Self-Distillation Through a Novel Ensemble Architecture ☆16 · Aug 29, 2022 · Updated 3 years ago
- ☆25 · Nov 14, 2022 · Updated 3 years ago
- ☆22 · Sep 17, 2024 · Updated last year
- ☆15 · Apr 4, 2024 · Updated 2 years ago
- Official implementation of "RelaxLoss: Defending Membership Inference Attacks without Losing Utility" (ICLR 2022) ☆48 · Aug 18, 2022 · Updated 3 years ago
- [ICML 2023] Are Diffusion Models Vulnerable to Membership Inference Attacks? ☆43 · Sep 4, 2024 · Updated last year
- 🔒 Implementation of Shokri et al. (2016), "Membership Inference Attacks against Machine Learning Models" ☆34 · Aug 29, 2022 · Updated 3 years ago
- Code for the paper: Label-Only Membership Inference Attacks ☆67 · Sep 11, 2021 · Updated 4 years ago
- ☆14 · May 8, 2024 · Updated last year
- ☆13 · Apr 12, 2022 · Updated 4 years ago
- Official implementation of "GAN-Leaks: A Taxonomy of Membership Inference Attacks against Generative Models" (CCS 2020) ☆46 · Apr 22, 2022 · Updated 4 years ago
- ☆372 · Apr 8, 2026 · Updated 3 weeks ago
- Public implementation of the paper "On the Importance of Difficulty Calibration in Membership Inference Attacks" ☆16 · Dec 1, 2021 · Updated 4 years ago
- PyTorch implementations of data augmentation and network regularization methods: CutMix, Cutout, ShakeDrop, Mixup, label smoothing ☆11 · Aug 19, 2021 · Updated 4 years ago
- Code for the paper "Overconfidence is a Dangerous Thing: Mitigating Membership Inference Attacks by Enforcing Less Confident Prediction" … ☆13 · Sep 6, 2023 · Updated 2 years ago
- Systematic Evaluation of Membership Inference Privacy Risks of Machine Learning Models ☆132 · Apr 9, 2024 · Updated 2 years ago
- Python package to create adversarial agents for membership inference attacks against machine learning models ☆46 · Feb 12, 2019 · Updated 7 years ago
- Implementation of the paper "Membership Inference Attacks Against Machine Learning Models", Shokri et al. ☆59 · May 12, 2019 · Updated 6 years ago
- Data Valuation without Training of a Model, submitted to ICLR'23 ☆22 · Dec 30, 2022 · Updated 3 years ago
- Shadow Attack, LiRA, Quantile Regression, and RMIA implementations in PyTorch (online version) ☆14 · Nov 8, 2024 · Updated last year
- ☆13 · Sep 26, 2024 · Updated last year
- ☆13 · Jun 17, 2024 · Updated last year
- Code for the AAAI 2021 paper "Membership Privacy for Machine Learning Models Through Knowledge Transfer" ☆11 · Apr 5, 2021 · Updated 5 years ago
- Code for the paper "ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models" ☆84 · Nov 22, 2021 · Updated 4 years ago
- Knowledge distillation (KD) from a decision-based black-box (DB3) teacher without training data ☆22 · May 3, 2022 · Updated 3 years ago
- Official repo for An Efficient Membership Inference Attack for the Diffusion Model by Proximal Initialization ☆16 · Mar 8, 2024 · Updated 2 years ago
- Privacy Risks of Securing Machine Learning Models against Adversarial Examples ☆46 · Nov 25, 2019 · Updated 6 years ago
- ☆20 · Feb 22, 2023 · Updated 3 years ago
- ☆17 · Oct 11, 2021 · Updated 4 years ago
- Privacy Meter: An open-source library to audit data privacy in statistical and machine learning algorithms ☆708 · Apr 26, 2025 · Updated last year
- Code for the paper "Membership Inference Attacks Against Vision-Language Models" ☆29 · Jan 25, 2025 · Updated last year
- Data-free knowledge distillation using Gaussian noise (NeurIPS paper) ☆15 · Mar 24, 2023 · Updated 3 years ago
- Network simulation of jitter, packet loss, and time delay ☆13 · Jul 16, 2018 · Updated 7 years ago
- ☆23 · Dec 22, 2024 · Updated last year
- Official repository for Practical Membership Inference Attacks Against Large-Scale Multi-Modal Models: A Pilot Study (ICCV 2023…) ☆25 · Sep 29, 2023 · Updated 2 years ago
- Likelihood Ratio Attack (LiRA) in PyTorch ☆16 · Mar 3, 2025 · Updated last year
- This project evaluates the privacy leakage of differentially private machine learning models ☆136 · Dec 8, 2022 · Updated 3 years ago
- See https://github.com/ilyakava/gan for results on ImageNet 128. Code for a Multi-Hinge Loss with K+1 Conditional GANs ☆23 · Jan 10, 2021 · Updated 5 years ago
- GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation (USENIX Security '23) ☆49 · Jul 3, 2023 · Updated 2 years ago
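The repositories above cover several membership inference attack families (shadow models, label-only, LiRA) and defenses (RelaxLoss, self-distillation, knowledge transfer). The simplest attack they build on is a confidence-threshold test: an overfit model tends to be more confident on its training ("member") points than on unseen ("non-member") points. The sketch below illustrates that idea on purely synthetic confidence scores — the distributions, threshold, and numbers are assumptions for illustration and are not taken from any listed repository:

```python
# Minimal sketch of a confidence-threshold membership inference attack.
# Synthetic data only: we fake the confidence gap an overfit model would show.
import numpy as np

rng = np.random.default_rng(0)

# Assumed: members get high max-softmax confidence, non-members get lower.
member_conf = rng.beta(8, 2, size=1000)      # skewed toward 1.0
nonmember_conf = rng.beta(5, 5, size=1000)   # centered near 0.5

def attack(confidences, threshold=0.7):
    """Predict 'member' when the model's top confidence exceeds the threshold."""
    return confidences > threshold

tpr = attack(member_conf).mean()       # members correctly flagged
fpr = attack(nonmember_conf).mean()    # non-members wrongly flagged
advantage = tpr - fpr                  # membership advantage: TPR - FPR
print(f"TPR={tpr:.2f}  FPR={fpr:.2f}  advantage={advantage:.2f}")
```

A positive advantage means the model leaks membership information; stronger attacks in the list (e.g. LiRA) replace the fixed threshold with per-example likelihood ratios calibrated via shadow models.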