AhmedSalem2 / ML-Leaks
Code for the paper "ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models"
☆83 · Updated 3 years ago
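For context, ML-Leaks' simplest, model- and data-independent adversary decides membership by thresholding a statistic of the target model's posterior vector (e.g., the top-class confidence). A minimal sketch of that idea, assuming you already have softmax posteriors from the target model; the function name and the 0.9 threshold are illustrative and not taken from the repository:

```python
import numpy as np

def confidence_threshold_attack(posteriors, threshold=0.9):
    """Guess membership from a target model's softmax posteriors.

    posteriors: (n_samples, n_classes) array of predicted class probabilities.
    threshold: illustrative cut-off on the top-class confidence; the paper
               tunes such a statistic rather than fixing a single value.
    Returns a boolean array where True means "predicted training-set member".
    """
    top_confidence = posteriors.max(axis=1)  # highest predicted probability per sample
    return top_confidence >= threshold
```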
Alternatives and similar repositories for ML-Leaks:
Users interested in ML-Leaks are comparing it to the repositories listed below.
- Implementation of the paper "Membership Inference Attacks Against Machine Learning Models", Shokri et al. ☆59 · Updated 5 years ago
- ☆45 · Updated 5 years ago
- Code for Membership Inference Attack against Machine Learning Models (in Oakland 2017) ☆193 · Updated 7 years ago
- Code for the paper: Label-Only Membership Inference Attacks ☆65 · Updated 3 years ago
- Code for ML Doctor ☆87 · Updated 8 months ago
- Membership Inference, Attribute Inference and Model Inversion attacks implemented using PyTorch. ☆58 · Updated 6 months ago
- ☆23 · Updated 2 years ago
- Implementation of the Model Inversion Attack introduced with Model Inversion Attacks that Exploit Confidence Information and Basic Counte… ☆83 · Updated 2 years ago
- Systematic Evaluation of Membership Inference Privacy Risks of Machine Learning Models ☆125 · Updated last year
- Code for Machine Learning Models that Remember Too Much (in CCS 2017) ☆30 · Updated 7 years ago
- Code for Exploiting Unintended Feature Leakage in Collaborative Learning (in Oakland 2019) ☆53 · Updated 5 years ago
- Code for "Neural Network Inversion in Adversarial Setting via Background Knowledge Alignment" (CCS 2019) ☆46 · Updated 5 years ago
- paper code ☆25 · Updated 4 years ago
- ABS: Scanning Neural Networks for Back-doors by Artificial Brain Stimulation ☆50 · Updated 2 years ago
- Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks (RAID 2018) ☆48 · Updated 6 years ago
- Attacking a dog vs. fish classifier that uses transfer learning (InceptionV3) ☆70 · Updated 7 years ago
- ☆68 · Updated 2 years ago
- ☆27 · Updated 3 years ago
- Code & supplementary material for the paper "Label Inference Attacks Against Federated Learning" (USENIX Security 2022) ☆84 · Updated last year
- The code for our Updates-Leak paper ☆16 · Updated 4 years ago
- Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks (ICLR '20) ☆30 · Updated 4 years ago
- ICML 2022 code for "Neurotoxin: Durable Backdoors in Federated Learning" https://arxiv.org/abs/2206.10341 ☆71 · Updated 2 years ago
- Python package to create adversarial agents for membership inference attacks against machine learning models ☆46 · Updated 6 years ago
- Source code for the ACSAC paper "STRIP: A Defence Against Trojan Attacks on Deep Neural Networks" ☆55 · Updated 5 months ago
- Code for the CSF 2018 paper "Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting" ☆38 · Updated 6 years ago
- This project's goal is to evaluate the privacy leakage of differentially private machine learning models. ☆131 · Updated 2 years ago
- ☆19 · Updated 7 months ago
- A library for running membership inference attacks against ML models ☆143 · Updated 2 years ago
- A simple backdoor model for federated learning. We use MNIST as the original dataset for the data attack and we use the CIFAR-10 dataset… ☆14 · Updated 4 years ago
- [ICML 2023] Official code implementation of "Chameleon: Adapting to Peer Images for Planting Durable Backdoors in Federated Learning (htt… ☆40 · Updated 3 months ago