ginevracoal / robustBNNs
Code for paper "Robustness of Bayesian Neural Networks to Gradient-Based Attacks"
☆17 · Updated last year
Alternatives and similar repositories for robustBNNs
Users interested in robustBNNs are comparing it to the repositories listed below.
- Code and data for the ICLR 2021 paper "Perceptual Adversarial Robustness: Defense Against Unseen Threat Models" ☆55 · Updated 3 years ago
- Official implementation of "Training Certifiably Robust Neural Networks with Efficient Local Lipschitz Bounds" (NeurIPS 2021) ☆24 · Updated 2 years ago
- Source code for "Neural Anisotropy Directions" ☆16 · Updated 4 years ago
- ☆158 · Updated 4 years ago
- RayS: A Ray Searching Method for Hard-label Adversarial Attack (KDD 2020) ☆56 · Updated 4 years ago
- ☆23 · Updated last year
- PyTorch implementations of adversarial defenses and utilities ☆34 · Updated last year
- Sparse-RS: a versatile framework for query-efficient sparse black-box adversarial attacks ☆44 · Updated 3 years ago
- Implementation of Confidence-Calibrated Adversarial Training (CCAT) ☆45 · Updated 5 years ago
- On the effectiveness of adversarial training against common corruptions (UAI 2022) ☆30 · Updated 3 years ago
- Understanding and Improving Fast Adversarial Training (NeurIPS 2020) ☆95 · Updated 3 years ago
- ☆23 · Updated 3 years ago
- Implementation of Wasserstein adversarial attacks ☆23 · Updated 4 years ago
- Semi-supervised learning for adversarial robustness (https://arxiv.org/pdf/1905.13736.pdf) ☆142 · Updated 5 years ago
- Feature Scattering Adversarial Training (NeurIPS 2019) ☆73 · Updated last year
- [ICML'20] Multi Steepest Descent (MSD) for robustness against the union of multiple perturbation models ☆26 · Updated last year
- Unofficial implementation of the DeepMind papers "Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples… ☆97 · Updated 3 years ago
- Fighting Gradients with Gradients: Dynamic Defenses against Adversarial Attacks ☆39 · Updated 4 years ago
- Implementation of the CURE algorithm from "Robustness via Curvature Regularization, and Vice Versa" ☆31 · Updated 2 years ago
- Attacks Which Do Not Kill Training Make Adversarial Learning Stronger (ICML 2020) ☆125 · Updated last year
- [ICLR 2021] "Robust Overfitting may be mitigated by properly learned smoothening" by Tianlong Chen*, Zhenyu Zhang*, Sijia Liu, Shiyu Chan… ☆47 · Updated 3 years ago
- Code for Stability Training with Noise (STN) ☆22 · Updated 4 years ago
- Code for "On Adaptive Attacks to Adversarial Example Defenses" ☆87 · Updated 4 years ago
- Code for the paper "(De)Randomized Smoothing for Certifiable Defense against Patch Attacks" by Alexander Levine and Soheil Feizi ☆17 · Updated 2 years ago
- Code for the ICLR 2020 paper "Improving Adversarial Robustness Requires Revisiting Misclassified Examples" ☆151 · Updated 4 years ago
- Code for the paper "Geometry-aware Instance-reweighted Adversarial Training" (ICLR 2021 oral) ☆59 · Updated 4 years ago
- A unified benchmark problem for data poisoning attacks ☆156 · Updated last year
- Code for the paper "Adversarial Training and Robustness for Multiple Perturbations" (NeurIPS 2019) ☆47 · Updated 2 years ago
- Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching ☆108 · Updated 11 months ago
- Provable adversarial robustness at ImageNet scale ☆395 · Updated 6 years ago