um-dsp / Morphence
Morphence: An implementation of a moving target defense against adversarial example attacks, demonstrated on image classification models trained on MNIST and CIFAR10.
☆22 · Updated 6 months ago
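For context on the defense itself: the moving-target idea is to keep an attacker from ever querying a fixed model. Below is a minimal sketch of that serving loop, assuming PyTorch; `make_student` and `MovingTargetPool` are illustrative names, not the repository's actual API, and the paper's per-student retraining and confidence-based scheduling are simplified here to weight noise and random selection.

```python
import copy
import random

import torch
from torch import nn


def make_student(base: nn.Module, noise_std: float = 0.01) -> nn.Module:
    """Derive one pool member: copy the base model and perturb its weights.
    (The actual defense also retrains each copy, and adversarially trains
    a subset of the pool; those steps are omitted in this sketch.)"""
    student = copy.deepcopy(base)
    with torch.no_grad():
        for p in student.parameters():
            p.add_(torch.randn_like(p) * noise_std)
    return student


class MovingTargetPool:
    """Serve each query from a randomly chosen pool member and regenerate
    the pool after a fixed query budget, so repeated probing never
    interacts with a fixed target model."""

    def __init__(self, base: nn.Module, pool_size: int = 5,
                 query_budget: int = 1000):
        self.base = base
        self.pool_size = pool_size
        self.query_budget = query_budget
        self._renew()

    def _renew(self) -> None:
        # Regenerate all students and reset the query counter.
        self.pool = [make_student(self.base) for _ in range(self.pool_size)]
        self.queries = 0

    def predict(self, x: torch.Tensor) -> torch.Tensor:
        if self.queries >= self.query_budget:
            self._renew()  # budget exhausted: move the target
        self.queries += 1
        model = random.choice(self.pool)  # per-query model selection
        model.eval()
        with torch.no_grad():
            return model(x)
```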
Alternatives and similar repositories for Morphence:
Users interested in Morphence are comparing it to the repositories listed below.
- Foolbox implementation for the NeurIPS 2021 paper "Fast Minimum-norm Adversarial Attacks through Adaptive Norm Constraints" ☆25 · Updated 2 years ago
- Code for paper "PatchGuard: A Provably Robust Defense against Adversarial Patches via Small Receptive Fields and Masking" ☆64 · Updated 2 years ago
- ☆64 · Updated 4 years ago
- Example of the attack described in the paper "Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization" ☆21 · Updated 5 years ago
- Is RobustBench/AutoAttack a suitable Benchmark for Adversarial Robustness? ☆11 · Updated 2 years ago
- Code for "On the Trade-off between Adversarial and Backdoor Robustness" (NIPS 2020)☆17Updated 4 years ago
- The code is for our NeurIPS 2019 paper: https://arxiv.org/abs/1910.04749☆33Updated 4 years ago
- This is for releasing the source code of the ACSAC paper "STRIP: A Defence Against Trojan Attacks on Deep Neural Networks"☆54Updated 3 months ago
- Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks (ICLR '20) ☆29 · Updated 4 years ago
- Official repository for the CVPR 2021 Data-Free Model Extraction paper: https://arxiv.org/abs/2011.14779 ☆71 · Updated 11 months ago
- ☆40 · Updated last year
- Code repository for the paper "Revisiting the Assumption of Latent Separability for Backdoor Defenses" (ICLR 2023) ☆39 · Updated 2 years ago
- ☆92 · Updated 4 years ago
- Bullseye Polytope Clean-Label Poisoning Attack ☆14 · Updated 4 years ago
- Official repository for the AAAI-20 paper "Hidden Trigger Backdoor Attacks" ☆123 · Updated last year
- Watermarking against model extraction attacks in MLaaS (ACM MM 2021) ☆33 · Updated 3 years ago
- Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks (RAID 2018) ☆47 · Updated 6 years ago
- ☆13 · Updated 2 years ago
- Code for "CloudLeak: Large-Scale Deep Learning Models Stealing Through Adversarial Examples" (NDSS 2020) ☆20 · Updated 4 years ago
- Code for ML Doctor ☆86 · Updated 6 months ago
- ☆10 · Updated 3 years ago
- An evaluation framework for mitigating DNN backdoor attacks using data augmentations ☆9 · Updated 4 years ago
- SaTML 2023; 1st place in the CVPR '21 Security AI Challenger: Unrestricted Adversarial Attacks on ImageNet ☆25 · Updated 2 years ago
- A reproduction of the Neural Cleanse paper; it is simple yet effective. Posted on okaland. ☆30 · Updated 3 years ago
- Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching ☆98 · Updated 6 months ago
- [CVPRW'22] A privacy attack that exploits Adversarial Training models to compromise the privacy of Federated Learning systems ☆12 · Updated 2 years ago
- ☆23 · Updated 3 years ago
- ☆44 · Updated 4 years ago
- Implementation of the paper "MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation" ☆32 · Updated 3 years ago
- ☆23 · Updated 2 years ago