um-dsp / Morphence
Morphence: An implementation of a moving target defense against adversarial example attacks, demonstrated on image classification models trained on MNIST and CIFAR-10.
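The core idea behind a moving target defense like Morphence is to avoid serving every query from one static model: the defender keeps a pool of perturbed variants of a base model, answers each query from a scheduled pool member, and renews the pool after a query budget so an attacker cannot reliably probe a fixed target. A minimal toy sketch of that idea (all class and parameter names here are illustrative, not Morphence's actual API; the "models" are just noisy linear classifiers):

```python
import random

class MovingTargetPool:
    """Toy moving-target defense: serve each query from a randomly chosen
    perturbed copy of a base model, and regenerate the whole pool once a
    fixed query budget is exhausted.

    Illustrative sketch only -- names, defaults, and the scheduling rule
    are assumptions, not Morphence's implementation."""

    def __init__(self, base_weights, pool_size=5, query_budget=100, noise=0.05):
        self.base = base_weights
        self.pool_size = pool_size
        self.query_budget = query_budget
        self.noise = noise
        self.queries = 0
        self._regenerate()

    def _regenerate(self):
        # Each pool member is the base model plus small Gaussian weight noise,
        # so members agree on clean inputs but differ near decision boundaries.
        self.pool = [
            [w + random.gauss(0.0, self.noise) for w in self.base]
            for _ in range(self.pool_size)
        ]
        self.queries = 0

    def predict(self, x):
        # Renew the pool once the query budget is used up.
        if self.queries >= self.query_budget:
            self._regenerate()
        self.queries += 1
        weights = random.choice(self.pool)  # schedule: uniform random member
        score = sum(w * xi for w, xi in zip(weights, x))
        return 1 if score > 0 else 0

# Demo: successive queries may be answered by different pool members.
pool = MovingTargetPool(base_weights=[0.5, -0.25], query_budget=3)
preds = [pool.predict([1.0, 1.0]) for _ in range(5)]
print(preds)
```

In the real system the pool members are full neural networks (e.g. retrained or adversarially fine-tuned students of a base model) and the renewal step is far more expensive, but the query-budget-then-regenerate control flow is the part this sketch is meant to show.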
Related projects:
- Example of the attack described in the paper "Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization".
- Foolbox implementation for the NeurIPS 2021 paper "Fast Minimum-norm Adversarial Attacks through Adaptive Norm Constraints".
- Repository for "Certified Defenses for Adversarial Patches" (ICLR 2020).
- Code for the NeurIPS 2019 paper: https://arxiv.org/abs/1910.04749
- Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples.
- Code for the paper "PatchGuard: A Provably Robust Defense against Adversarial Patches via Small Receptive Fields and Masking".
- A unified toolbox for running major robustness verification approaches for DNNs (S&P 2023).
- Honest-but-Curious Nets: Sensitive Attributes of Private Inputs Can Be Secretly Coded into the Classifiers' Outputs (ACM CCS 2021).
- Code repository for the paper "Revisiting the Assumption of Latent Separability for Backdoor Defenses" (ICLR 2023).
- Is RobustBench/AutoAttack a suitable benchmark for adversarial robustness?
- Dataset Inference for Ownership Resolution in Machine Learning (ICLR 2021).
- Code for the paper "Poisoned classifiers are not only backdoored, they are fundamentally broken".
- Unofficial PyTorch implementation of the paper "Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures".
- Code for ML Doctor.
- Bullseye Polytope clean-label poisoning attack.
- Code release for "Unrolling SGD: Understanding Factors Influencing Machine Unlearning" (EuroS&P 2022).
- SaTML 2023; 1st place in the CVPR 2021 Security AI Challenger: Unrestricted Adversarial Attacks on ImageNet.
- Code for "On the Trade-off between Adversarial and Backdoor Robustness" (NeurIPS 2020).
- Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks (ICLR 2020).
- Official repository for the CVPR 2021 Data-Free Model Extraction paper: https://arxiv.org/abs/2011.14779
- Library for training globally-robust neural networks.
- Defending against Model Stealing via Verifying Embedded External Features.
- Code for the paper "Adversarial Training Against Location-Optimized Adversarial Patches" (ECCV Workshops 2020).
- Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching.