Implementation of Shokri et al. (2016), "Membership Inference Attacks against Machine Learning Models"
★34, updated Aug 29, 2022
Alternatives and similar repositories for MIA
Users interested in MIA are comparing it to the repositories listed below.
- Implementation of the paper "Membership Inference Attacks Against Machine Learning Models", Shokri et al. (★59, updated May 12, 2019)
- Code for "Membership Inference Attack against Machine Learning Models" (Oakland 2017) (★199, updated Nov 15, 2017)
- Shadow Attack, LiRA, Quantile Regression, and RMIA implementations in PyTorch (online version) (★14, updated Nov 8, 2024)
- ★15, updated Apr 4, 2024
- ★19, updated Feb 22, 2023
- Membership Inference Attacks and Defenses in Neural Network Pruning (★28, updated Jul 12, 2022)
- ★371, updated Jan 4, 2026
- ★32, updated Sep 2, 2024
- Official implementation of "RelaxLoss: Defending Membership Inference Attacks without Losing Utility" (ICLR 2022) (★48, updated Aug 18, 2022)
- final-project-level3-nlp-02, created by GitHub Classroom (★11, updated Dec 31, 2021)
- Code for the paper "Label-Only Membership Inference Attacks" (★68, updated Sep 11, 2021)
- ★22, updated Aug 15, 2022
- Gaussian Membership Inference Privacy (NeurIPS 2023) (★12, updated Jul 27, 2024)
- Membership Inference Attack against Graph Neural Networks (★12, updated Nov 9, 2022)
- ★11, updated Dec 18, 2024
- Membership Inference, Attribute Inference, and Model Inversion attacks implemented in PyTorch (★66, updated Oct 4, 2024)
- An unofficial PyTorch implementation of "ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on ML Models" (★11, updated Dec 23, 2023)
- Transformer Model for Lip Reading in the Wild (LRW) Benchmark (★12, updated Mar 18, 2023)
- Privacy Meter: an open-source library to audit data privacy in statistical and machine learning algorithms (★704, updated Apr 26, 2025)
- ★12, updated Sep 26, 2024
- A library for running membership inference attacks against ML models (★152, updated Dec 8, 2022)
- ★33, updated Nov 27, 2023
- Modular framework for property inference attacks on deep neural networks (★18, updated Jun 8, 2023)
- Code for ML Doctor (★91, updated Aug 14, 2024)
- ★13, updated Oct 20, 2022
- DeepLearning.AI Generative AI short-course materials, with Korean-translated lecture materials (★25, updated Apr 23, 2024)
- A Jax/Flax implementation for Korean-language LLM inference on TPUs (★12, updated Jun 12, 2023)
- FederBoost's federated gradient-boosting decision-tree algorithm, with federation-enabled membership inference (★16, updated Dec 13, 2023)
- Source code of the NAACL 2025 Findings paper "Scaling Up Membership Inference: When and How Attacks Succeed on Large Language Models" (★15, updated Dec 16, 2025)
- ★45, updated Nov 10, 2019
- ★20, updated Oct 28, 2025
- Official repo for "An Efficient Membership Inference Attack for the Diffusion Model by Proximal Initialization" (★16, updated Mar 8, 2024)
- ★10, updated Dec 28, 2023
- ★19, updated Mar 6, 2023
- GBDT-based model with efficient unlearning (SIGMOD 2023) (★10, updated Sep 7, 2025)
- [ICML 2023] "Are Diffusion Models Vulnerable to Membership Inference Attacks?" (★43, updated Sep 4, 2024)
- AdvDoor: Adversarial Backdoor Attack of Deep Learning System (★32, updated Nov 5, 2024)
- Adversarial attack on a CNN trained on MNIST using targeted I-FGSM and targeted MI-FGSM (★11, updated Feb 17, 2018)
- Official implementation of the paper "Separate the Wheat from the Chaff: Model Deficiency Unlearning via Parameter-Efficient Module Opera…" (★11, updated Sep 20, 2024)
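For orientation, the shadow-model attack from Shokri et al. that most of these repositories implement can be sketched as below. The model choices (scikit-learn random forests), the synthetic data, and the single combined attack model are illustrative assumptions, not taken from any listed repository; the paper itself trains one attack model per output class.

```python
# Minimal sketch of the shadow-model membership inference attack
# (Shokri et al., 2016). Models and data are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=6000, n_features=20,
                           n_informative=10, n_classes=2, random_state=0)

# Target model: trained on a private "member" split; the attacker only
# sees its output confidence vectors.
target_in, target_out = (X[:1000], y[:1000]), (X[1000:2000], y[1000:2000])
target = RandomForestClassifier(random_state=0).fit(*target_in)

# Shadow models: trained on disjoint data the attacker controls, so each
# confidence vector comes with a known member / non-member label.
attack_X, attack_y = [], []
for i in range(2):
    lo = 2000 + i * 2000
    shadow = RandomForestClassifier(random_state=i).fit(
        X[lo:lo + 1000], y[lo:lo + 1000])
    attack_X.append(shadow.predict_proba(X[lo:lo + 1000]))          # members
    attack_y.append(np.ones(1000))
    attack_X.append(shadow.predict_proba(X[lo + 1000:lo + 2000]))   # non-members
    attack_y.append(np.zeros(1000))

# Attack model: maps a confidence vector to a membership guess.
attack = RandomForestClassifier(random_state=0).fit(
    np.vstack(attack_X), np.concatenate(attack_y))

# Evaluate against the target's actual train / holdout points.
guess_in = attack.predict(target.predict_proba(target_in[0]))
guess_out = attack.predict(target.predict_proba(target_out[0]))
acc = (guess_in.mean() + (1.0 - guess_out.mean())) / 2
print(f"membership inference balanced accuracy: {acc:.2f}")
```

The attack exploits the gap between a model's confidence on training points and on unseen points, so it works best against overfit targets; accuracy near 0.5 indicates the target leaks little membership signal.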