csong27 / ml-model-remember
Code for "Machine Learning Models that Remember Too Much" (CCS 2017)
Related projects:
- Privacy Risks of Securing Machine Learning Models against Adversarial Examples
- Code for "Exploiting Unintended Feature Leakage in Collaborative Learning" (Oakland 2019)
- Implementation of the model inversion attack introduced in "Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures"
- Code for the CSF 2018 paper "Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting"
- Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks (RAID 2018)
- ABS: Scanning Neural Networks for Back-doors by Artificial Brain Stimulation
- Code for the paper "Label-Only Membership Inference Attacks"
- Code for the paper "ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models"☆80Updated 2 years ago
- Example of the attack described in the paper "Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization"
- Code for "Neural Network Inversion in Adversarial Setting via Background Knowledge Alignment" (CCS 2019)☆45Updated 4 years ago
- Attacking a dog-vs-fish classifier built via transfer learning on Inception v3
- Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks (ICLR '20)
- The code for our Updates-Leak paper
- Code for our NeurIPS 2019 paper: https://arxiv.org/abs/1910.04749
- Implementation of the paper "Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning"
- Systematic Evaluation of Membership Inference Privacy Risks of Machine Learning Models
- Code for ML Doctor
- KNN Defense Against Clean Label Poisoning Attacks
- Code for "On the Trade-off between Adversarial and Backdoor Robustness" (NIPS 2020)☆16Updated 3 years ago
- Implementation of the paper "Membership Inference Attacks Against Machine Learning Models", Shokri et al. (a minimal sketch of this attack follows the list)
- Input Purification Defense Against Trojan Attacks on Deep Neural Network Systems
- Official repository for the CVPR 2021 paper "Data-Free Model Extraction": https://arxiv.org/abs/2011.14779
- This project's goal is to evaluate the privacy leakage of differentially private machine learning models.
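Several of the entries above implement membership inference (Shokri et al., ML-Leaks, label-only attacks). As a rough illustration of the shadow-model idea from Shokri et al., the sketch below collapses the paper's many shadow models and per-class attack models down to a single shadow model and a single attack classifier, with synthetic data standing in for the target's training distribution. Everything here (dataset, model choices, variable names) is illustrative and not taken from any of the listed repositories:

```python
# Minimal shadow-model membership inference sketch (Shokri et al. style).
# Assumption: attacker has data from the same distribution as the target's
# training set, plus black-box access to the target's confidence scores.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; disjoint halves for the target and the shadow.
X, y = make_classification(n_samples=8000, n_features=20, n_informative=10,
                           random_state=0)
X_target, X_shadow, y_target, y_shadow = train_test_split(
    X, y, test_size=0.5, random_state=0)

# Target model: trained on half its split; the other half are non-members.
Xt_in, Xt_out, yt_in, yt_out = train_test_split(
    X_target, y_target, test_size=0.5, random_state=1)
target = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xt_in, yt_in)

# Shadow model mimics the target on data the attacker controls, so the
# attacker knows exactly which points were members of its training set.
Xs_in, Xs_out, ys_in, ys_out = train_test_split(
    X_shadow, y_shadow, test_size=0.5, random_state=2)
shadow = RandomForestClassifier(n_estimators=100, random_state=1).fit(Xs_in, ys_in)

def attack_features(model, X):
    # Sorted confidence vector: members tend to get more confident,
    # lower-entropy predictions than non-members.
    return np.sort(model.predict_proba(X), axis=1)

# Attack model: learns "member vs. non-member" from shadow confidences.
A = np.vstack([attack_features(shadow, Xs_in), attack_features(shadow, Xs_out)])
b = np.concatenate([np.ones(len(Xs_in)), np.zeros(len(Xs_out))])
attack = LogisticRegression().fit(A, b)

# Transfer the attack to the target model and measure accuracy.
scores = np.concatenate([
    attack.predict_proba(attack_features(target, Xt_in))[:, 1],
    attack.predict_proba(attack_features(target, Xt_out))[:, 1]])
truth = np.concatenate([np.ones(len(Xt_in)), np.zeros(len(Xt_out))])
print("attack accuracy: %.3f" % ((scores > 0.5) == truth).mean())
```

The attack works to the extent the target overfits: the gap between its confidence on training members and on fresh points is exactly the signal the attack classifier picks up, which is why several of the repositories above (e.g., the overfitting-connection and systematic-evaluation projects) study that gap directly.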