um-dsp / Morphence
Morphence: An implementation of a moving target defense against adversarial example attacks, demonstrated on image classification models trained on MNIST and CIFAR-10.
☆23 · Updated last year
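The defense Morphence implements works, at a high level, by serving each prediction from a pool of diverse student models and retiring models after a query budget, so adversarial examples crafted against one snapshot stop transferring to the next. Below is a minimal Python sketch of that serving loop; it is an illustrative assumption, not Morphence's actual API (`MovingTargetPool`, the query budget, and the toy linear models are all hypothetical).

```python
import random
import numpy as np

class MovingTargetPool:
    """Hypothetical sketch of a moving-target serving loop: answer queries
    from a rotating pool of models so no single fixed model can be probed
    indefinitely. Morphence's real pool generation and renewal differ."""

    def __init__(self, models, queries_per_rotation=1000):
        self.models = models                      # pre-generated, diverse models
        self.queries_per_rotation = queries_per_rotation
        self.query_count = 0
        self.active = random.choice(self.models)

    def predict(self, x):
        # Swap the active model after a fixed query budget, invalidating
        # adversarial examples tuned against the previous one.
        if self.query_count and self.query_count % self.queries_per_rotation == 0:
            self.active = random.choice(self.models)
        self.query_count += 1
        return self.active(x)

# Toy usage: three "models" that are just different random linear scorers.
rng = np.random.default_rng(0)
models = [lambda x, W=rng.normal(size=(10, 784)): int((W @ x).argmax())
          for _ in range(3)]
pool = MovingTargetPool(models, queries_per_rotation=5)
print(pool.predict(rng.normal(size=784)))         # class index of the active model
```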
Alternatives and similar repositories for Morphence
Users interested in Morphence are comparing it to the repositories listed below.
- Foolbox implementation for the NeurIPS 2021 paper "Fast Minimum-norm Adversarial Attacks through Adaptive Norm Constraints"☆24 · Updated 3 years ago
- Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks (ICLR '20)☆33 · Updated 5 years ago
- Code for the paper "Deep Partition Aggregation: Provable Defenses against General Poisoning Attacks"☆13 · Updated 3 years ago
- Attacking a dog-vs-fish classifier that uses transfer learning with InceptionV3☆74 · Updated 7 years ago
- Repository for "Certified Defenses for Adversarial Patches" (ICLR 2020)☆34 · Updated 5 years ago
- Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks (RAID 2018)☆47 · Updated 7 years ago
- This repository contains implementations of three adversarial example attack methods (FGSM, IFGSM, MI-FGSM) and one defense, distillation (a minimal FGSM sketch appears after this list)☆136 · Updated 5 years ago
- ☆53 · Updated 4 years ago
- Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching☆111 · Updated last year
- CVPR 2021 official repository for the Data-Free Model Extraction paper: https://arxiv.org/abs/2011.14779☆75 · Updated last year
- AdvAttacks; adversarial examples; FGSM; JSMA; CW; single pixel attack; local search attack; DeepFool☆58 · Updated 6 years ago
- Using relativism to improve GAN-based Adversarial Attacks. 🦾☆44 · Updated 2 years ago
- Official Repository for the AAAI-20 paper "Hidden Trigger Backdoor Attacks"☆133 · Updated 2 years ago
- PyTorch code for ens_adv_train☆16 · Updated 6 years ago
- This is an implementation demo of the ICLR 2021 paper "Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks"☆127 · Updated 4 years ago
- ☆57 · Updated 2 years ago
- ☆19 · Updated 3 years ago
- Example of the attack described in the paper "Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization"☆21 · Updated 6 years ago
- Adversarial Robustness, White-box, Adversarial Attack☆52 · Updated 3 years ago
- Craft poisoned data using MetaPoison☆54 · Updated 4 years ago
- KNN Defense Against Clean Label Poisoning Attacks☆13 · Updated 4 years ago
- ConvexPolytopePosioning☆37 · Updated 6 years ago
- Code for "On Adaptive Attacks to Adversarial Example Defenses"☆88 · Updated 4 years ago
- ☆88 · Updated 4 years ago
- Privacy Risks of Securing Machine Learning Models against Adversarial Examples☆46 · Updated 6 years ago
- Detection of adversarial examples using influence functions and nearest neighbors☆37 · Updated 3 years ago
- Code for the paper "Label-Only Membership Inference Attacks"☆67 · Updated 4 years ago
- Code for "On the Trade-off between Adversarial and Backdoor Robustness" (NIPS 2020)☆17Updated 5 years ago
- Implementation of the paper "Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning"☆21 · Updated 5 years ago
- ☆42 · Updated 2 years ago
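Several entries above (the FGSM/IFGSM/MI-FGSM implementations and the AdvAttacks collection) center on the fast gradient sign method, the baseline attack most of the listed defenses are evaluated against. A minimal untargeted FGSM sketch in PyTorch, assuming a differentiable classifier and inputs scaled to [0, 1] (the toy linear model is illustrative, not taken from any listed repository):

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    """Untargeted FGSM: take one step of size eps in the direction of the
    sign of the loss gradient w.r.t. the input, then clamp back to the
    valid pixel range."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Toy usage: a linear classifier over flattened 28x28 images.
model = torch.nn.Linear(784, 10)
x = torch.rand(1, 784)       # pixel values in [0, 1]
y = torch.tensor([3])        # assumed true label
x_adv = fgsm(model, x, y, eps=0.1)
```

IFGSM and MI-FGSM iterate this same step (MI-FGSM additionally accumulates a momentum term on the gradient), which is why the three attacks are usually implemented together.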