safreita1 / unmask
Adversarial detection and defense for deep learning systems using robust feature alignment
☆18 · Updated 5 years ago
Alternatives and similar repositories for unmask
Users interested in unmask are comparing it to the repositories listed below.
- Code for ICCV 2021 paper "AGKD-BML: Defense Against Adversarial Attack by Attention Guided Knowledge Distillation and Bi-directional Met… ☆12 · Updated 3 years ago
- Detection of adversarial examples using influence functions and nearest neighbors ☆37 · Updated 3 years ago
- ☆26 · Updated 7 years ago
- ☆53 · Updated 4 years ago
- Code for "On Adaptive Attacks to Adversarial Example Defenses" ☆88 · Updated 4 years ago
- Code for reproducing the experimental results in "Proper Network Interpretability Helps Adversarial Robustness in Classification", publi… ☆13 · Updated 5 years ago
- KNN Defense Against Clean-Label Poisoning Attacks ☆13 · Updated 4 years ago
- ConvexPolytopePosioning ☆37 · Updated 6 years ago
- ATTA (Efficient Adversarial Training with Transferable Adversarial Examples) ☆37 · Updated 5 years ago
- ☆16 · Updated 6 years ago
- Code for the NeurIPS 2020 paper "Backpropagating Linearly Improves Transferability of Adversarial Examples" ☆42 · Updated 2 years ago
- Learnable Boundary Guided Adversarial Training (ICCV 2021) ☆38 · Updated last year
- ☆11 · Updated 2 years ago
- Implementation of the paper "Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning" ☆21 · Updated 5 years ago
- ☆19 · Updated 3 years ago
- Understanding Catastrophic Overfitting in Single-step Adversarial Training [AAAI 2021] ☆28 · Updated 3 years ago
- ☆11 · Updated 6 years ago
- Official PyTorch implementation of the GeoDA algorithm, a black-box attack to generate adversarial exam… ☆35 · Updated 4 years ago
- Code for semi-supervised robust training (SRT) ☆18 · Updated 2 years ago
- RAB: Provable Robustness Against Backdoor Attacks ☆39 · Updated 2 years ago
- ☆58 · Updated 3 years ago
- Craft poisoned data using MetaPoison ☆54 · Updated 4 years ago
- ☆42 · Updated 2 years ago
- Universal Adversarial Perturbations (UAPs) for PyTorch ☆49 · Updated 4 years ago
- Code and experiments for the adversarial detection paper ☆21 · Updated 4 years ago
- [ICLR 2023, Spotlight] Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning ☆33 · Updated 2 years ago
- Code for the CVPR 2020 paper "QEBA: Query-Efficient Boundary-Based Blackbox Attack" ☆33 · Updated 4 years ago
- Attacking a dog-vs-fish classifier that uses transfer learning (InceptionV3) ☆74 · Updated 7 years ago
- A minimal PyTorch implementation of Label-Consistent Backdoor Attacks ☆29 · Updated 4 years ago
- [ICLR'21] Dataset Inference for Ownership Resolution in Machine Learning ☆32 · Updated 3 years ago