SSGAalto / prada-protecting-against-dnn-model-stealing-attacks
Reference implementation of the PRADA model stealing defense. IEEE Euro S&P 2019.
☆33 · Updated 5 years ago
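For context on what the repository implements: PRADA (Juuti et al., IEEE Euro S&P 2019) detects model extraction by watching a client's query stream and flagging the client when the minimum distances between new queries and previously seen queries of the same predicted class stop fitting a normal distribution (Shapiro-Wilk test). The sketch below is an illustrative reimplementation of that idea under stated assumptions, not the repository's actual API; the class name, thresholds, and L2-on-flattened-inputs distance are assumptions.

```python
# Illustrative sketch of PRADA's detection idea (Juuti et al., Euro S&P 2019):
# track the minimum L2 distance of each new client query to previously accepted
# queries of the same predicted class, and flag the client when those distances
# stop fitting a normal distribution (Shapiro-Wilk statistic below a threshold).
# Names, thresholds, and data handling here are assumptions, not the repo's API.
import numpy as np
from scipy.stats import shapiro


class PradaDetector:
    def __init__(self, delta=0.95, min_samples=20):
        self.delta = delta              # Shapiro-Wilk W threshold (tuned per model in the paper)
        self.min_samples = min_samples  # collect enough distances before testing normality
        self.growing_sets = {}          # predicted class -> list of accepted queries
        self.thresholds = {}            # predicted class -> novelty threshold T_c
        self.distances = []             # accepted minimum distances across all classes

    def process_query(self, x, predicted_class):
        """Update per-client state with one query; return True if the client is flagged."""
        x = np.asarray(x, dtype=np.float64).ravel()
        gset = self.growing_sets.setdefault(predicted_class, [])
        if not gset:
            gset.append(x)
            return False

        d_min = min(np.linalg.norm(x - q) for q in gset)
        # Only "sufficiently novel" queries grow the set and contribute a distance.
        t_c = self.thresholds.get(predicted_class, 0.0)
        if d_min > t_c:
            gset.append(x)
            self.distances.append(d_min)
            # Recompute T_c from pairwise distances within this class (mean minus std).
            pairwise = [np.linalg.norm(a - b)
                        for i, a in enumerate(gset) for b in gset[i + 1:]]
            self.thresholds[predicted_class] = max(0.0, np.mean(pairwise) - np.std(pairwise))

        if len(self.distances) < self.min_samples:
            return False
        w_stat, _ = shapiro(self.distances)
        return w_stat < self.delta  # deviation from normality -> likely extraction attack
```

In the paper, the threshold δ is tuned per target model so that benign query streams keep a near-zero false positive rate, which is why the reference implementation evaluates several δ values rather than fixing one.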
Alternatives and similar repositories for prada-protecting-against-dnn-model-stealing-attacks:
Users interested in prada-protecting-against-dnn-model-stealing-attacks are also comparing the repositories listed below.
- Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks (ICLR '20) ☆29 · Updated 4 years ago
- Watermarking against model extraction attacks in MLaaS. ACM MM 2021. ☆33 · Updated 3 years ago
- [ICLR'21] Dataset Inference for Ownership Resolution in Machine Learning ☆31 · Updated 2 years ago
- ABS: Scanning Neural Networks for Back-doors by Artificial Brain Stimulation ☆49 · Updated 2 years ago
- Implementation of the paper "MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation". ☆29 · Updated 3 years ago
- Official repository for the Data-Free Model Extraction paper (CVPR 2021). https://arxiv.org/abs/2011.14779 ☆69 · Updated 9 months ago
- Attacking a dog-vs-fish classifier that uses transfer learning with InceptionV3 ☆71 · Updated 6 years ago
- Universal Adversarial Perturbations (UAPs) for PyTorch ☆48 · Updated 3 years ago
- ☆10 · Updated 3 years ago
- Privacy Risks of Securing Machine Learning Models against Adversarial Examples ☆44 · Updated 5 years ago
- Code for "Label-Consistent Backdoor Attacks" ☆52 · Updated 4 years ago
- Official implementation of the ICLR 2022 paper "Adversarial Unlearning of Backdoors via Implicit Hypergradient" ☆54 · Updated 2 years ago
- A simple implementation of BadNets on MNIST ☆32 · Updated 5 years ago
- Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks (RAID 2018) ☆46 · Updated 6 years ago
- RAB: Provable Robustness Against Backdoor Attacks ☆39 · Updated last year
- ☆31 · Updated 4 months ago
- Defending against Model Stealing via Verifying Embedded External Features ☆34 · Updated 2 years ago
- ☆45 · Updated 3 years ago
- ☆79 · Updated 3 years ago
- ☆21 · Updated 4 years ago
- Code for ML Doctor ☆85 · Updated 5 months ago
- Craft poisoned data using MetaPoison ☆49 · Updated 3 years ago
- ☆23 · Updated 2 years ago
- Example of the attack described in the paper "Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization" ☆21 · Updated 5 years ago
- ICCV 2021. We find that most existing triggers of backdoor attacks in deep learning contain severe artifacts in the frequency domain. This Rep… ☆43 · Updated 2 years ago
- Code repository for the paper "Revisiting the Assumption of Latent Separability for Backdoor Defenses" (ICLR 2023) ☆36 · Updated last year
- A minimal PyTorch implementation of Label-Consistent Backdoor Attacks ☆29 · Updated 3 years ago
- Source code release for the ACSAC paper "STRIP: A Defence Against Trojan Attacks on Deep Neural Networks" ☆53 · Updated 2 months ago
- Code for the NeurIPS 2021 paper "Adversarial Neuron Pruning Purifies Backdoored Deep Models" ☆57 · Updated last year
- Implementation of the CVPR 2022 Oral paper "Better Trigger Inversion Optimization in Backdoor Scanning" ☆24 · Updated 2 years ago