tribhuvanesh / prediction-poisoning
Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks (ICLR '20)
☆29 · Updated 4 years ago
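The gist of the defense: instead of returning the model's true posterior to an API client, the server returns a perturbed posterior that stays useful for benign users (e.g., the top-1 label is preserved) while corrupting the gradient signal an attacker would use to train a surrogate model. Below is a minimal PyTorch sketch of that general idea only; the `poison_posterior` function and its `epsilon` budget are illustrative assumptions, not this repository's API, and the paper's actual MAD defense chooses the perturbation by maximizing the angular deviation of the surrogate's gradient rather than sampling it at random.

```python
import torch
import torch.nn.functional as F

def poison_posterior(logits: torch.Tensor, epsilon: float = 0.5) -> torch.Tensor:
    """Perturb each posterior within an L1 budget `epsilon`, keeping top-1 intact.

    Hypothetical helper for illustration; not the paper's MAD optimization.
    """
    probs = F.softmax(logits, dim=-1)
    # Zero-mean noise stays in the simplex's tangent space, so the
    # perturbed vector still sums to ~1 before renormalization.
    noise = torch.randn_like(probs)
    noise -= noise.mean(dim=-1, keepdim=True)
    noise = epsilon * noise / noise.abs().sum(dim=-1, keepdim=True)
    poisoned = (probs + noise).clamp_min(1e-8)
    poisoned = poisoned / poisoned.sum(dim=-1, keepdim=True)
    # Utility constraint: keep a perturbation only if the argmax is unchanged.
    keep = (poisoned.argmax(dim=-1) == probs.argmax(dim=-1)).unsqueeze(-1)
    return torch.where(keep, poisoned, probs)
```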
Alternatives and similar repositories for prediction-poisoning:
Users interested in prediction-poisoning are comparing it to the repositories listed below.
- Implementation of the paper "MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation". ☆32 · Updated 3 years ago
- Input Purification Defense Against Trojan Attacks on Deep Neural Network Systems. ☆27 · Updated 3 years ago
- Official implementation of the ICLR 2022 paper "Adversarial Unlearning of Backdoors via Implicit Hypergradient". ☆54 · Updated 2 years ago
- A minimal PyTorch implementation of Label-Consistent Backdoor Attacks. ☆30 · Updated 4 years ago
- Code for "Label-Consistent Backdoor Attacks". ☆55 · Updated 4 years ago
- Code for the paper "Label-Only Membership Inference Attacks". ☆64 · Updated 3 years ago
- Official repository for the Data-Free Model Extraction paper (CVPR 2021): https://arxiv.org/abs/2011.14779 ☆71 · Updated 11 months ago
- Code for our NeurIPS 2019 paper: https://arxiv.org/abs/1910.04749 ☆33 · Updated 4 years ago
- Anti-Backdoor Learning (NeurIPS 2021). ☆82 · Updated last year
- Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks (RAID 2018). ☆48 · Updated 6 years ago
- [ICLR'21] Dataset Inference for Ownership Resolution in Machine Learning. ☆32 · Updated 2 years ago
- Source code for the ACSAC paper "STRIP: A Defence Against Trojan Attacks on Deep Neural Networks". ☆55 · Updated 4 months ago
- ABS: Scanning Neural Networks for Back-doors by Artificial Brain Stimulation. ☆50 · Updated 2 years ago
- Privacy Risks of Securing Machine Learning Models against Adversarial Examples. ☆44 · Updated 5 years ago
- Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks. ☆17 · Updated 5 years ago
- Code for "CloudLeak: Large-Scale Deep Learning Models Stealing Through Adversarial Examples" (NDSS 2020). ☆20 · Updated 4 years ago
- Defending against Model Stealing via Verifying Embedded External Features. ☆35 · Updated 3 years ago
- Universal Adversarial Perturbations (UAPs) for PyTorch. ☆48 · Updated 3 years ago
- Repository for "Certified Defenses for Adversarial Patches" (ICLR 2020). ☆32 · Updated 4 years ago
- Attacking a dog-vs-fish classifier that uses transfer learning (InceptionV3). ☆70 · Updated 6 years ago
- Craft poisoned data using MetaPoison. ☆50 · Updated 3 years ago