safreita1 / unmask
Adversarial detection and defense for deep learning systems using robust feature alignment
☆16 · Updated 4 years ago
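As a rough illustration of the robust-feature-alignment idea named in the description above: extract a set of robust features (e.g. object parts) from the input, compare it with the feature set expected for the classifier's predicted class, and flag the input as adversarial when the overlap is too low. This is only a sketch; the function names, threshold, and the class-to-part mapping below are invented for illustration and are not taken from the unmask codebase.

```python
# Hypothetical class -> expected-parts mapping (invented for illustration).
EXPECTED_FEATURES = {
    "bird": {"beak", "wing", "tail", "eye"},
    "bicycle": {"wheel", "frame", "handlebar", "saddle"},
}

def jaccard(a, b):
    """Jaccard similarity between two feature sets."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def is_adversarial(extracted, predicted_class, threshold=0.5):
    """Flag the input when its extracted features poorly match the
    features expected for the predicted class."""
    expected = EXPECTED_FEATURES[predicted_class]
    return jaccard(extracted, expected) < threshold

# A clean image: extracted bird parts agree with the "bird" prediction.
clean = is_adversarial({"beak", "wing", "eye"}, "bird")
# An attacked image: classified "bicycle" but showing only bird parts.
attacked = is_adversarial({"beak", "wing", "eye"}, "bicycle")
```

Here `clean` is `False` and `attacked` is `True`: the mismatch between what the model predicts and what the image visibly contains is the detection signal.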
Alternatives and similar repositories for unmask
Users interested in unmask are comparing it to the repositories listed below.
- ☆25 · Updated 6 years ago
- ☆51 · Updated 3 years ago
- ☆11 · Updated 2 years ago
- Sparse and Imperceivable Adversarial Attacks (accepted to ICCV 2019). ☆40 · Updated 4 years ago
- Code for reproducing the experimental results in "Proper Network Interpretability Helps Adversarial Robustness in Classification", publi… ☆13 · Updated 4 years ago
- Implementation of the paper "Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning". ☆20 · Updated 5 years ago
- Code for semi-supervised robust training (SRT). ☆18 · Updated 2 years ago
- Code for the NeurIPS 2020 paper "Backpropagating Linearly Improves Transferability of Adversarial Examples". ☆42 · Updated 2 years ago
- Understanding Catastrophic Overfitting in Single-step Adversarial Training [AAAI 2021]. ☆27 · Updated 2 years ago
- Detection of adversarial examples using influence functions and nearest neighbors. ☆36 · Updated 2 years ago
- Craft poisoned data using MetaPoison. ☆51 · Updated 4 years ago
- ConvexPolytopePosioning. ☆35 · Updated 5 years ago
- Universal Adversarial Perturbations (UAPs) for PyTorch. ☆48 · Updated 3 years ago
- ☆11 · Updated 5 years ago
- ☆19 · Updated 3 years ago
- ☆19 · Updated 3 years ago
- Code for the ICCV 2021 paper "AGKD-BML: Defense Against Adversarial Attack by Attention Guided Knowledge Distillation and Bi-directional Met…". ☆12 · Updated 3 years ago
- Code for the CVPR 2020 paper "Towards Transferable Targeted Attack". ☆15 · Updated 3 years ago
- [Machine Learning 2023] Imbalanced Gradients: A Subtle Cause of Overestimated Adversarial Robustness. ☆17 · Updated 11 months ago
- Foolbox implementation for the NeurIPS 2021 paper "Fast Minimum-norm Adversarial Attacks through Adaptive Norm Constraints". ☆25 · Updated 3 years ago
- Code and data for the ICLR 2021 paper "Perceptual Adversarial Robustness: Defense Against Unseen Threat Models". ☆55 · Updated 3 years ago
- Code for identifying natural backdoors in existing image datasets. ☆15 · Updated 2 years ago
- PyTorch implementation of the ICLR 2023 paper "Is Adversarial Training Really a Silver Bullet for Mitigating Data Poisoning?". ☆12 · Updated 2 years ago
- [NeurIPS 2021] Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training. ☆31 · Updated 3 years ago
- Repository for Certified Defenses for Adversarial Patches [ICLR 2020]. ☆32 · Updated 4 years ago
- Code for "Learning Universal Adversarial Perturbation by Adversarial Example". ☆8 · Updated 3 years ago
- Code for the CVPR 2020 paper "QEBA: Query-Efficient Boundary-Based Blackbox Attack". ☆32 · Updated 4 years ago
- [ICLR 2023] Distilling Cognitive Backdoor Patterns within an Image. ☆35 · Updated 7 months ago
- RAB: Provable Robustness Against Backdoor Attacks. ☆38 · Updated last year
- ☆23 · Updated last year