Membership Inference, Attribute Inference and Model Inversion attacks implemented using PyTorch.
☆66 · Updated Oct 4, 2024
Alternatives and similar repositories for Privacy-Attacks-in-Machine-Learning
Users interested in Privacy-Attacks-in-Machine-Learning are comparing it to the libraries listed below.
- Implementation of the paper "Membership Inference Attacks Against Machine Learning Models", Shokri et al. ☆59 · Updated May 12, 2019
- ☆12 · Updated Sep 26, 2024
- This course introduced me to three cutting-edge technologies for privacy-preserving AI: Federated Learning, Differential Privacy, and Enc… ☆11 · Updated Sep 2, 2019
- Repository that contains the code for the paper titled "Unifying Distillation with Personalization in Federated Learning" ☆13 · Updated May 31, 2021
- Official implementation of "RelaxLoss: Defending Membership Inference Attacks without Losing Utility" (ICLR 2022) ☆48 · Updated Aug 18, 2022
- 🔒 Implementation of Shokri et al. (2016), "Membership Inference Attacks against Machine Learning Models" ☆34 · Updated Aug 29, 2022
- Code for the paper "Label-Only Membership Inference Attacks" ☆68 · Updated Sep 11, 2021
- Implementation of the model inversion attack introduced in "Model Inversion Attacks that Exploit Confidence Information and Basic Counte…" ☆84 · Updated Feb 26, 2023
- ☆25 · Updated Nov 14, 2022
- Verifying machine unlearning by backdooring ☆20 · Updated Mar 25, 2023
- An awesome list of papers on privacy attacks against machine learning ☆634 · Updated Mar 18, 2024
- [arXiv:2411.10023] "Model Inversion Attacks: A Survey of Approaches and Countermeasures" ☆216 · Updated May 30, 2025
- Code for "Membership Inference Attacks against Machine Learning Models" (Oakland 2017) ☆199 · Updated Nov 15, 2017
- Official PyTorch implementation for "Continual Learning and Private Unlearning" ☆18 · Updated Jul 19, 2022
- Privacy Meter: an open-source library to audit data privacy in statistical and machine learning algorithms ☆700 · Updated Apr 26, 2025
- ☆32 · Updated Sep 2, 2024
- ☆10 · Updated Jul 16, 2023
- A library for running membership inference attacks against ML models ☆152 · Updated Dec 8, 2022
- A PyTorch implementation of the paper "FedCon: A Contrastive Framework for Federated Semi-Supervised Learning" ☆24 · Updated May 18, 2022
- Code for the paper "Overconfidence is a Dangerous Thing: Mitigating Membership Inference Attacks by Enforcing Less Confident Prediction" … ☆12 · Updated Sep 6, 2023
- Implementation of "DPMLBench: Holistic Evaluation of Differentially Private Machine Learning" ☆11 · Updated Nov 24, 2023
- Federated learning and membership inference attack experiments on CIFAR-10 ☆23 · Updated Jan 29, 2020
- Papers related to federated learning in top conferences (2020–2024) ☆69 · Updated Oct 14, 2024
- Code for the paper "ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models" ☆85 · Updated Nov 22, 2021
- Membership inference attacks and defenses in neural network pruning ☆28 · Updated Jul 12, 2022
- Implementation of a DP-based federated learning framework using PyTorch ☆315 · Updated Jan 3, 2026
- ☆12 · Updated Jan 5, 2023
- [NeurIPS 2022] JAX/Haiku implementation of "On Privacy and Personalization in Cross-Silo Federated Learning" ☆27 · Updated Apr 16, 2023
- Code for the DP cross-silo federated learning paper ☆11 · Updated Jul 10, 2020
- GradAttack is a Python library for easy evaluation of privacy risks in public gradients in federated learning, as well as corresponding m… ☆200 · Updated May 7, 2024
- Unofficial PyTorch implementation of the paper "Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures" ☆58 · Updated Sep 28, 2025
- ☆32 · Updated May 2, 2021
- Membership inference attack against graph neural networks ☆12 · Updated Nov 9, 2022
- ☆14 · Updated Dec 8, 2022
- [NeurIPS 2024] "What makes unlearning hard and what to do about it" and "Scalability of memorization-based machine unlearning" ☆21 · Updated May 24, 2025
- A comprehensive toolbox for model inversion attacks and defenses that is easy to get started with ☆190 · Updated Sep 23, 2025
- Privacy-preserving deep learning ☆15 · Updated Sep 11, 2017
- A summary of existing works on vertical federated/split learning ☆15 · Updated Nov 28, 2021
- ☆12 · Updated Apr 18, 2019
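The common thread in the membership inference repositories above is exploiting the gap between a model's behavior on training members and non-members. As a rough, self-contained illustration (not the method of any specific repository listed), the sketch below simulates a confidence-thresholding attack on synthetic confidence scores; in a real attack these scores would come from querying a trained model, and the threshold would be tuned with shadow models or a held-out set. All names and numbers here are hypothetical.

```python
# Minimal membership inference sketch: predict "member" when the model's
# confidence on the true label exceeds a threshold. The confidence scores
# are synthetic, standing in for a model that is more confident on its
# own training data (a common overfitting signal exploited by MIAs).
import random

random.seed(0)

# Hypothetical per-example confidences on the true label: members of the
# training set tend to receive higher confidence than non-members.
members     = [min(1.0, random.gauss(0.90, 0.05)) for _ in range(1000)]
non_members = [min(1.0, random.gauss(0.70, 0.15)) for _ in range(1000)]

def attack(confidence, threshold=0.8):
    """Guess membership from a single confidence score."""
    return confidence >= threshold

tp = sum(attack(c) for c in members)          # members correctly flagged
tn = sum(not attack(c) for c in non_members)  # non-members correctly rejected
accuracy = (tp + tn) / (len(members) + len(non_members))
print(f"attack accuracy: {accuracy:.2f}")  # well above the 0.5 random baseline
```

The wider the confidence gap between members and non-members (i.e., the more the model overfits), the higher the attack accuracy, which is why defenses such as RelaxLoss and less-confident prediction (both listed above) work by shrinking that gap.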