karandwivedi42 / adversarial
Pytorch - Adversarial Training
☆26 · Updated 7 years ago
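The repository above implements adversarial training in PyTorch, and most of the attacks and defenses listed below build on the same core idea: perturbing an input along the sign of the loss gradient. As a framework-free sketch of that idea (the logistic-regression model, weights, and function names here are illustrative, not code from the repository), a single FGSM step can be written as:

```python
import math

def sigmoid(z):
    """Logistic sigmoid."""
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, y, w, b, eps):
    """One Fast Gradient Sign Method step against a logistic-regression model.

    The gradient of the cross-entropy loss w.r.t. the input x is
    (sigmoid(w.x + b) - y) * w, so FGSM shifts each input coordinate
    by eps in the direction of that gradient's sign.
    """
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    sign = lambda g: (g > 0) - (g < 0)  # -1, 0, or +1
    return [xi + eps * sign((p - y) * wi) for xi, wi in zip(x, w)]
```

Adversarial training then mixes such perturbed examples into each SGD step in place of (or alongside) the clean inputs; iterative variants such as PGD simply repeat this step with projection back into an eps-ball.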
Alternatives and similar repositories for adversarial
Users interested in adversarial are comparing it to the repositories listed below.
- Feature Scattering Adversarial Training (NeurIPS 2019) ☆73 · Updated last year
- Adversarial Defense for Ensemble Models (ICML 2019) ☆61 · Updated 4 years ago
- Further improve robustness of mixup-trained models in inference (ICLR 2020) ☆60 · Updated 4 years ago
- Understanding and Improving Fast Adversarial Training [NeurIPS 2020] ☆95 · Updated 3 years ago
- Code for "Diversity can be Transferred: Output Diversification for White- and Black-box Attacks" ☆53 · Updated 4 years ago
- Code for FAB-attack ☆33 · Updated 4 years ago
- Code for Black-Box Adversarial Attack with Transferable Model-based Embedding ☆57 · Updated 5 years ago
- [NeurIPS'20 Oral] DVERGE: Diversifying Vulnerabilities for Enhanced Robust Generation of Ensembles ☆55 · Updated 3 years ago
- MACER: MAximizing CErtified Radius (ICLR 2020) ☆30 · Updated 5 years ago
- Strongest attack against Feature Scatter and Adversarial Interpolation ☆25 · Updated 5 years ago
- RayS: A Ray Searching Method for Hard-label Adversarial Attack (KDD 2020) ☆56 · Updated 4 years ago
- Source code for Learning Transferable Adversarial Examples via Ghost Networks (AAAI 2020) ☆58 · Updated 6 years ago
- PyTorch implementation of Parametric Noise Injection for adversarial defense ☆43 · Updated 5 years ago
- ☆16 · Updated 5 years ago
- [ICLR 2021] "Robust Overfitting may be mitigated by properly learned smoothening" by Tianlong Chen*, Zhenyu Zhang*, Sijia Liu, Shiyu Chan… ☆47 · Updated 3 years ago
- Code for "Prior Convictions: Black-Box Adversarial Attacks with Bandits and Priors" ☆64 · Updated 5 years ago
- ☆48 · Updated 4 years ago
- ☆45 · Updated 5 years ago
- [ICCV 2019] Enhancing Adversarial Example Transferability with an Intermediate Level Attack (https://arxiv.org/abs/1907.10823) ☆78 · Updated 5 years ago
- ☆35 · Updated 4 years ago
- StrAttack (ICLR 2019) ☆33 · Updated 5 years ago
- Ensemble Adversarial Training on MNIST with PyTorch ☆20 · Updated 6 years ago
- Imbalanced Gradients: A New Cause of Overestimated Adversarial Robustness (MD attacks) ☆11 · Updated 4 years ago
- Codebase for "Exploring the Landscape of Spatial Robustness" (ICML 2019, https://arxiv.org/abs/1712.02779) ☆26 · Updated 5 years ago
- Adversarial Distributional Training (NeurIPS 2020) ☆63 · Updated 4 years ago
- Code for the paper "Adversarial Training and Robustness for Multiple Perturbations" (NeurIPS 2019) ☆47 · Updated 2 years ago
- ☆58 · Updated 2 years ago
- ☆87 · Updated 10 months ago
- Black-box attacks for deep neural network models ☆70 · Updated 6 years ago
- Code for the NeurIPS 2020 paper "Backpropagating Linearly Improves Transferability of Adversarial Examples" ☆42 · Updated 2 years ago