PKUAI26 / Bayesian-Adversarial-Learning
☆20 · Updated 6 years ago
Alternatives and similar repositories for Bayesian-Adversarial-Learning
Users interested in Bayesian-Adversarial-Learning are comparing it to the repositories listed below.
- [NeurIPS 2020] The official repository of "AdvFlow: Inconspicuous Black-box Adversarial Attacks using Normalizing Flows". ☆48 · Updated 2 years ago
- Codebase for "Exploring the Landscape of Spatial Robustness" (ICML'19, https://arxiv.org/abs/1712.02779). ☆26 · Updated 6 years ago
- Repository for our ICCV 2019 paper: Adversarial Defense via Learning to Generate Diverse Attacks ☆22 · Updated 4 years ago
- Certifying Some Distributional Robustness with Principled Adversarial Training (https://arxiv.org/abs/1710.10571) ☆45 · Updated 7 years ago
- Targeted black-box adversarial attack using Bayesian Optimization ☆37 · Updated 5 years ago
- Generative Model for Neural Networks ☆24 · Updated 5 years ago
- ☆88 · Updated last year
- Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network ☆62 · Updated 6 years ago
- [ICML 2019] ME-Net: Towards Effective Adversarial Robustness with Matrix Estimation ☆54 · Updated 4 months ago
- Project page for our paper: Interpreting Adversarially Trained Convolutional Neural Networks ☆66 · Updated 6 years ago
- Public code for the paper "Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks." ☆34 · Updated 6 years ago
- A Closer Look at Accuracy vs. Robustness ☆88 · Updated 4 years ago
- Pre-Training Buys Better Robustness and Uncertainty Estimates (ICML 2019) ☆100 · Updated 3 years ago
- Official implementation for the paper: A New Defense Against Adversarial Images: Turning a Weakness into a Strength ☆38 · Updated 5 years ago
- Distributional and Outlier Robust Optimization (ICML 2021) ☆27 · Updated 4 years ago
- CVPR'19 experiments with (on-manifold) adversarial examples. ☆45 · Updated 5 years ago
- Code for "Learning Perceptually-Aligned Representations via Adversarial Robustness" ☆162 · Updated 5 years ago
- Logit Pairing Methods Can Fool Gradient-Based Attacks [NeurIPS 2018 Workshop on Security in Machine Learning] ☆19 · Updated 6 years ago
- StrAttack, ICLR 2019 ☆33 · Updated 6 years ago
- Semi-supervised learning for adversarial robustness (https://arxiv.org/pdf/1905.13736.pdf) ☆142 · Updated 5 years ago
- Learning perturbation sets for robust machine learning ☆65 · Updated 4 years ago
- [NeurIPS 2020] "Once-for-All Adversarial Training: In-Situ Tradeoff between Robustness and Accuracy for Free" by Haotao Wang*, Tianlong C… ☆44 · Updated 3 years ago
- ☆91 · Updated 3 years ago
- Code for the AAAI 2018 accepted paper: "Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing the… ☆55 · Updated 2 years ago
- An Algorithm to Quantify Robustness of Recurrent Neural Networks ☆49 · Updated 5 years ago
- ☆26 · Updated 6 years ago
- Code for Stability Training with Noise (STN) ☆22 · Updated 4 years ago
- PyTorch - Adversarial Training ☆26 · Updated 7 years ago
- Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation [NeurIPS 2017] ☆18 · Updated 7 years ago
- Code for the paper "Adversarial Training and Robustness for Multiple Perturbations", NeurIPS 2019 ☆47 · Updated 2 years ago