SSGAalto / prada-protecting-against-dnn-model-stealing-attacks
Reference implementation of the PRADA model stealing defense. IEEE Euro S&P 2019.
☆33 · Updated 5 years ago
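The listing itself does not describe the technique, so the sketch below follows the Euro S&P 2019 paper rather than this repository's code: PRADA tracks the minimum L2 distance of each new query to the client's previous queries and flags an extraction attack when the distribution of those distances stops looking normal (e.g., under a Shapiro-Wilk test). Names such as `PradaDetector`, the `delta` threshold, and the input flattening are illustrative assumptions, not the repository's API.

```python
# Minimal sketch of the PRADA detection idea (Juuti et al., Euro S&P 2019):
# benign clients tend to produce query distances that are roughly normally
# distributed, while synthetic extraction queries do not. Names and thresholds
# are illustrative, not taken from this repository.
import numpy as np
from scipy.stats import shapiro


class PradaDetector:
    def __init__(self, delta=0.95, min_queries=20):
        self.delta = delta              # Shapiro-Wilk statistic below which an attack is flagged
        self.min_queries = min_queries  # collect enough queries before testing
        self.queries = []               # flattened past queries from this client
        self.distances = []             # min L2 distance of each query to earlier ones

    def observe(self, x):
        """Record one incoming query and return True if an attack is suspected."""
        x = np.asarray(x, dtype=np.float64).ravel()
        if self.queries:
            dists = [np.linalg.norm(x - q) for q in self.queries]
            self.distances.append(min(dists))
        self.queries.append(x)

        if len(self.distances) < self.min_queries:
            return False
        # Flag an attack when the distance distribution deviates from normality.
        stat, _ = shapiro(np.array(self.distances))
        return stat < self.delta


# Usage: feed each client query to the detector as it arrives.
detector = PradaDetector()
rng = np.random.default_rng(0)
for _ in range(50):
    suspicious = detector.observe(rng.normal(size=(3, 32, 32)))
print("attack suspected:", suspicious)
```

The actual defense additionally keeps per-class distance sets and a growing set of benign-looking queries; this simplified version only illustrates the distribution test at its core.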
Alternatives and similar repositories for prada-protecting-against-dnn-model-stealing-attacks:
Users interested in prada-protecting-against-dnn-model-stealing-attacks are comparing it to the libraries listed below
- [ICLR'21] Dataset Inference for Ownership Resolution in Machine Learning ☆32 · Updated 2 years ago
- Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks (ICLR '20) ☆29 · Updated 4 years ago
- Defending against Model Stealing via Verifying Embedded External Features ☆35 · Updated 3 years ago
- Watermarking against model extraction attacks in MLaaS. ACM MM 2021. ☆33 · Updated 3 years ago
- ☆31 · Updated 6 months ago
- Official implementation of the ICLR 2022 paper "Adversarial Unlearning of Backdoors via Implicit Hypergradient" ☆54 · Updated 2 years ago
- ☆44 · Updated 4 years ago
- RAB: Provable Robustness Against Backdoor Attacks ☆39 · Updated last year
- ☆92 · Updated 4 years ago
- Official repository for the Data-Free Model Extraction paper (CVPR 2021). https://arxiv.org/abs/2011.14779 ☆71 · Updated 11 months ago
- ☆25 · Updated 6 years ago
- ABS: Scanning Neural Networks for Back-doors by Artificial Brain Stimulation ☆50 · Updated 2 years ago
- ☆10 · Updated 3 years ago
- Attacking a dog-vs-fish classifier that uses transfer learning with InceptionV3 ☆70 · Updated 6 years ago
- Implementation of the paper "MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation" ☆32 · Updated 3 years ago
- ☆21 · Updated 4 years ago
- Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks (RAID 2018) ☆47 · Updated 6 years ago
- Source code for the ACSAC paper "STRIP: A Defence Against Trojan Attacks on Deep Neural Networks" ☆54 · Updated 3 months ago
- ☆79 · Updated 3 years ago
- A simple implementation of BadNets on MNIST ☆32 · Updated 5 years ago
- Code for "On Adaptive Attacks to Adversarial Example Defenses" ☆86 · Updated 4 years ago
- Code for ML Doctor ☆86 · Updated 6 months ago
- Input Purification Defense Against Trojan Attacks on Deep Neural Network Systems ☆27 · Updated 3 years ago
- Implementation of the paper "Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning" ☆20 · Updated 4 years ago
- A minimal PyTorch implementation of Label-Consistent Backdoor Attacks ☆30 · Updated 4 years ago
- Code for our NeurIPS 2019 paper: https://arxiv.org/abs/1910.04749 ☆33 · Updated 4 years ago
- ☆64 · Updated 4 years ago
- Simple yet effective targeted transferable attack (NeurIPS 2021) ☆48 · Updated 2 years ago
- ☆45 · Updated 5 years ago
- Code for "Label-Consistent Backdoor Attacks" ☆53 · Updated 4 years ago