boschresearch / meta-adversarial-training
TensorFlow implementation of Meta Adversarial Training for Adversarial Patch Attacks on Tiny ImageNet.
☆25 · Updated 4 years ago
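For orientation, the adversarial-patch training described above can be sketched roughly as follows. This is a minimal, hypothetical TensorFlow 2 example, not the repository's implementation: the patch size, step size, clean-image value range, and the 64×64 Tiny ImageNet resolution are assumptions, and the paper's meta-learning over a population of patches is omitted. The sketch pastes a trainable patch at a random location, takes one signed-gradient ascent step on the patch, and then updates the model on the patched batch.

```python
import tensorflow as tf

IMAGE_SIZE = 64           # Tiny ImageNet resolution (assumption for this sketch)
PATCH_SIZE = 16           # illustrative patch side length
PATCH_STEP = 1.0 / 255.0  # illustrative signed-gradient step size for the patch


def apply_patch(images, patch, top, left):
    """Paste `patch` onto every image in the batch at position (top, left)."""
    padded_patch = tf.image.pad_to_bounding_box(patch, top, left, IMAGE_SIZE, IMAGE_SIZE)
    mask = tf.image.pad_to_bounding_box(tf.ones_like(patch), top, left, IMAGE_SIZE, IMAGE_SIZE)
    return images * (1.0 - mask) + padded_patch * mask


@tf.function
def adversarial_patch_step(model, optimizer, patch, images, labels):
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
    # Place the patch at a random location for this batch.
    top = tf.random.uniform([], 0, IMAGE_SIZE - PATCH_SIZE + 1, dtype=tf.int32)
    left = tf.random.uniform([], 0, IMAGE_SIZE - PATCH_SIZE + 1, dtype=tf.int32)

    # Inner step: move the patch in the direction that increases the loss.
    with tf.GradientTape() as tape:
        adv = apply_patch(images, patch, top, left)
        adv_loss = loss_fn(labels, model(adv, training=False))
    patch_grad = tape.gradient(adv_loss, patch)
    patch.assign(tf.clip_by_value(patch + PATCH_STEP * tf.sign(patch_grad), 0.0, 1.0))

    # Outer step: update the model on images carrying the current patch.
    with tf.GradientTape() as tape:
        adv = apply_patch(images, patch, top, left)
        loss = loss_fn(labels, model(adv, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss


# Example wiring (all names are placeholders):
# patch = tf.Variable(tf.random.uniform([PATCH_SIZE, PATCH_SIZE, 3]))
# loss = adversarial_patch_step(model, optimizer, patch, images, labels)
```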
Alternatives and similar repositories for meta-adversarial-training:
Users interested in meta-adversarial-training are comparing it to the repositories listed below.
- Official repo for the paper "Make Some Noise: Reliable and Efficient Single-Step Adversarial Training" (https://arxiv.org/abs/2202.01181; a generic single-step sketch follows this list) ☆25 · Updated 2 years ago
- Code for "Adversarial robustness against multiple and single $l_p$-threat models via quick fine-tuning of robust classifiers" ☆18 · Updated 2 years ago
- Code for Stability Training with Noise (STN) ☆21 · Updated 4 years ago
- On the effectiveness of adversarial training against common corruptions [UAI 2022] ☆30 · Updated 2 years ago
- Implementation of Confidence-Calibrated Adversarial Training (CCAT) ☆45 · Updated 4 years ago
- Code for the paper "MMA Training: Direct Input Space Margin Maximization through Adversarial Training" ☆34 · Updated 4 years ago
- Code for the paper "Adversarial Training and Robustness for Multiple Perturbations", NeurIPS 2019 ☆47 · Updated 2 years ago
- Source code of "Hold me tight! Influence of discriminative features on deep network boundaries" ☆22 · Updated 3 years ago
- Code and data for the ICLR 2021 paper "Perceptual Adversarial Robustness: Defense Against Unseen Threat Models" ☆55 · Updated 3 years ago
- Coupling rejection strategy against adversarial attacks (CVPR 2022) ☆28 · Updated 2 years ago
- Fighting Gradients with Gradients: Dynamic Defenses against Adversarial Attacks ☆38 · Updated 3 years ago
- Certified Patch Robustness via Smoothed Vision Transformers ☆42 · Updated 3 years ago
- PyTorch implementations of adversarial defenses and utilities ☆34 · Updated last year
- Code release for the ICML 2019 paper "Are generative classifiers more robust to adversarial attacks?" ☆23 · Updated 5 years ago
- PyTorch implementation of Adversarially Robust Distillation (ARD) ☆59 · Updated 5 years ago
- Helper-based Adversarial Training: Reducing Excessive Margin to Achieve a Better Accuracy vs. Robustness Trade-off ☆29 · Updated 2 years ago
- Code for the paper "Poisoned classifiers are not only backdoored, they are fundamentally broken" ☆26 · Updated 3 years ago
- Provably defending pretrained classifiers including the Azure, Google, AWS, and Clarifai APIs ☆96 · Updated 3 years ago
- Code for the paper "SmoothMix: Training Confidence-calibrated Smoothed Classifiers for Certified Robustness" (NeurIPS 2021) ☆21 · Updated 2 years ago
- Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses, NeurIPS 2020 Spotlight ☆26 · Updated 4 years ago
- [NeurIPS 2021] Fast Certified Robust Training with Short Warmup ☆23 · Updated last year
- Code for the CVPR 2020 paper "QEBA: Query-Efficient Boundary-Based Blackbox Attack" ☆30 · Updated 4 years ago
- CROWN: A Neural Network Verification Framework for Networks with General Activation Functions ☆38 · Updated 6 years ago
- A Closer Look at Accuracy vs. Robustness ☆88 · Updated 3 years ago
- Understanding and Improving Fast Adversarial Training [NeurIPS 2020] ☆95 · Updated 3 years ago
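Several of the listed repositories (e.g. "Make Some Noise" and "Understanding and Improving Fast Adversarial Training") study single-step adversarial training. For reference, a minimal, hypothetical FGSM-with-random-start training step in TensorFlow 2 might look like the sketch below; the epsilon budget, the [0, 1] image range, and the noise initialization are illustrative assumptions, not taken from any of the repositories above.

```python
import tensorflow as tf

EPSILON = 8.0 / 255.0  # illustrative L-infinity budget (assumption)


@tf.function
def fgsm_train_step(model, optimizer, images, labels):
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

    # Random start inside the epsilon ball, as noise-based single-step methods advocate.
    noisy = images + tf.random.uniform(tf.shape(images), -EPSILON, EPSILON)
    noisy = tf.clip_by_value(noisy, 0.0, 1.0)

    # Single FGSM step: one signed-gradient move, projected back into the epsilon ball.
    with tf.GradientTape() as tape:
        tape.watch(noisy)
        loss = loss_fn(labels, model(noisy, training=False))
    grad = tape.gradient(loss, noisy)
    adv = images + tf.clip_by_value(noisy + EPSILON * tf.sign(grad) - images, -EPSILON, EPSILON)
    adv = tf.clip_by_value(adv, 0.0, 1.0)

    # Train the model on the adversarial batch.
    with tf.GradientTape() as tape:
        adv_loss = loss_fn(labels, model(adv, training=True))
    grads = tape.gradient(adv_loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return adv_loss
```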