jeiks / Stealing_DL_Models
Copycat CNN
☆28 · Updated last year
Alternatives and similar repositories for Stealing_DL_Models
Users interested in Stealing_DL_Models are comparing it to the repositories listed below.
- Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks (ICLR '20) ☆31 · Updated 4 years ago
- Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching ☆102 · Updated 9 months ago
- Code for "CloudLeak: Large-Scale Deep Learning Models Stealing Through Adversarial Examples" (NDSS 2020) ☆20 · Updated 4 years ago
- Official repository for the Data-Free Model Extraction paper (CVPR 2021). https://arxiv.org/abs/2011.14779 ☆72 · Updated last year
- Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks (RAID 2018) ☆47 · Updated 6 years ago
- ☆96 · Updated 4 years ago
- Input-aware Dynamic Backdoor Attack (NeurIPS 2020) ☆35 · Updated 10 months ago
- Code for "On Adaptive Attacks to Adversarial Example Defenses" ☆87 · Updated 4 years ago
- A unified benchmark problem for data poisoning attacks ☆155 · Updated last year
- Source code for the ACSAC paper "STRIP: A Defence Against Trojan Attacks on Deep Neural Networks" ☆57 · Updated 6 months ago
- PyTorch implementation of Adversarial Patch on ImageNet (arXiv: https://arxiv.org/abs/1712.09665) ☆62 · Updated 5 years ago
- Implementation of the paper "MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation" ☆30 · Updated 3 years ago
- WaNet - Imperceptible Warping-based Backdoor Attack (ICLR 2021) ☆124 · Updated 6 months ago
- ☆23 · Updated 3 years ago
- ABS: Scanning Neural Networks for Back-doors by Artificial Brain Stimulation ☆52 · Updated 3 years ago
- A curated list of academic events on AI Security & Privacy ☆152 · Updated 9 months ago
- Universal Adversarial Perturbations (UAPs) for PyTorch ☆48 · Updated 3 years ago
- Implementation of AGNs, proposed in: M. Sharif, S. Bhagavatula, L. Bauer, M. Reiter. "A General Framework for Adversarial Examples with O… ☆37 · Updated 4 years ago
- ☆85 · Updated 4 years ago
- Official repository for the AAAI-20 paper "Hidden Trigger Backdoor Attacks" ☆127 · Updated last year
- ☆65 · Updated last year
- Implementation of the paper "Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning" ☆20 · Updated 5 years ago
- Implementation of three adversarial example attack methods, FGSM, IFGSM, and MI-FGSM, and one Distillation as defe… (a minimal FGSM sketch follows this list) ☆129 · Updated 4 years ago
- Input Purification Defense Against Trojan Attacks on Deep Neural Network Systems ☆27 · Updated 4 years ago
- Knockoff Nets: Stealing Functionality of Black-Box Models (an extraction loop in this style is sketched after this list) ☆99 · Updated 2 years ago
- A curated list of papers on adversarial machine learning (adversarial examples and defense methods) ☆210 · Updated 3 years ago
- A repository to quickly generate synthetic data and associated trojaned deep learning models ☆77 · Updated last year
- Implementation of membership inference and model inversion attacks, extracting training data information from an ML model. Benchmarking … ☆103 · Updated 5 years ago
- ☆51 · Updated 3 years ago
- Code for "PatchCleanser: Certifiably Robust Defense against Adversarial Patches for Any Image Classifier" ☆41 · Updated 2 years ago
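Several of the repositories above (Copycat CNN itself, Knockoff Nets, MAZE, Data-Free Model Extraction) are variants of the same black-box extraction loop: query the victim model for labels on a transfer set, then train a surrogate on the resulting pairs. Below is a minimal PyTorch sketch of that loop, assuming a query-only `victim` callable that returns logits and an unlabeled `query_loader`; the function name and parameters are illustrative, not taken from any of the listed repositories.

```python
import torch
import torch.nn.functional as F

def extract_model(victim, surrogate, query_loader, epochs=10, lr=1e-3, device="cpu"):
    """Train `surrogate` to imitate `victim` using only query access.

    victim:       black-box callable, x -> logits (no gradients required)
    surrogate:    torch.nn.Module being trained as the copy
    query_loader: yields unlabeled transfer-set batches (tensors)
    """
    surrogate.to(device).train()
    opt = torch.optim.Adam(surrogate.parameters(), lr=lr)
    for _ in range(epochs):
        for x in query_loader:
            x = x.to(device)
            with torch.no_grad():            # the victim is query-only
                soft_labels = F.softmax(victim(x), dim=1)
            opt.zero_grad()
            loss = F.kl_div(                 # match the victim's output distribution
                F.log_softmax(surrogate(x), dim=1),
                soft_labels, reduction="batchmean")
            loss.backward()
            opt.step()
    return surrogate
```

The repositories differ mainly in where the transfer set comes from: Copycat CNN queries with unrelated natural images, Knockoff Nets adaptively samples from a large public pool, and MAZE/DFME replace the loader with a generator trained via gradient estimation.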
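The FGSM/IFGSM/MI-FGSM repository listed above implements gradient-sign attacks; the one-step case, FGSM, is simple enough to sketch here. A minimal version, assuming a differentiable `model` that returns logits and inputs scaled to [0, 1]; `epsilon` is the per-pixel perturbation budget.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, epsilon=8 / 255):
    """One-step Fast Gradient Sign Method (Goodfellow et al., 2015)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()   # step along the sign of the loss gradient
    return x_adv.clamp(0, 1).detach()     # keep the image in the valid range
```

IFGSM iterates this step with a smaller step size and re-clamping after each iteration; MI-FGSM additionally accumulates a momentum term over the gradient directions.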