jhayes14 / black-box-attacks
Comparison of gradient estimation techniques for black-box adversarial examples
☆11 · Updated 6 years ago
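
For orientation, here is a minimal sketch (not code from this repository) of the kind of query-only gradient estimator such comparisons typically cover: an NES-style antithetic finite-difference estimate along random Gaussian directions. The function name `estimate_gradient` and all hyperparameters below are illustrative assumptions, not the repo's API.

```python
# Minimal sketch of query-based gradient estimation for black-box attacks:
# antithetic central differences along random Gaussian probe directions.
import numpy as np

def estimate_gradient(loss_fn, x, n_samples=50, sigma=0.01, rng=None):
    """Estimate grad_x loss_fn(x) using only function (query) access.

    Since E[u u^T] = I for standard Gaussian u, averaging
    (f(x + sigma*u) - f(x - sigma*u)) / (2*sigma) * u
    over many directions u approximates the true gradient.
    """
    rng = np.random.default_rng() if rng is None else rng
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(x.shape)           # random probe direction
        diff = loss_fn(x + sigma * u) - loss_fn(x - sigma * u)
        grad += (diff / (2 * sigma)) * u           # antithetic central difference
    return grad / n_samples

if __name__ == "__main__":
    # Toy check: for f(x) = ||x||^2 the true gradient is 2x.
    f = lambda x: float(np.sum(x ** 2))
    x0 = np.array([1.0, -2.0, 0.5])
    print(estimate_gradient(f, x0, n_samples=2000), "vs", 2 * x0)
```

In an attack setting, `loss_fn` would wrap queries to the target model, and the estimated gradient would drive an iterative perturbation update; the query budget (`n_samples` per step) is the main cost the compared techniques trade off.
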
Alternatives and similar repositories for black-box-attacks
Users interested in black-box-attacks are comparing it to the repositories listed below
- Provable Robustness of ReLU networks via Maximization of Linear Regions [AISTATS 2019]☆32 · Updated 5 years ago
- Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation [NeurIPS 2017]☆18 · Updated 7 years ago
- Investigating the robustness of state-of-the-art CNN architectures to simple spatial transformations.☆49 · Updated 5 years ago
- Learning perturbation sets for robust machine learning☆65 · Updated 3 years ago
- Code release for the ICML 2019 paper "Are generative classifiers more robust to adversarial attacks?"☆23 · Updated 6 years ago
- Codebase for "Exploring the Landscape of Spatial Robustness" (ICML'19, https://arxiv.org/abs/1712.02779).☆26 · Updated 5 years ago
- A Closer Look at Accuracy vs. Robustness☆89 · Updated 4 years ago
- A powerful white-box adversarial attack that exploits knowledge about the geometry of neural networks to find minimal adversarial perturb…☆12 · Updated 4 years ago
- [ICML'20] Multi Steepest Descent (MSD) for robustness against the union of multiple perturbation models.☆26 · Updated 11 months ago
- Randomized Smoothing of All Shapes and Sizes (ICML 2020).☆52 · Updated 4 years ago
- ☆88 · Updated 11 months ago
- Pre-Training Buys Better Robustness and Uncertainty Estimates (ICML 2019)☆100 · Updated 3 years ago
- Code for the paper "Adversarial Training and Robustness for Multiple Perturbations", NeurIPS 2019☆47 · Updated 2 years ago
- Implementation of Confidence-Calibrated Adversarial Training (CCAT).☆45 · Updated 4 years ago
- ☆29 · Updated 6 years ago
- CVPR'19 experiments with (on-manifold) adversarial examples.☆45 · Updated 5 years ago
- ☆38 · Updated 4 years ago
- Provably defending pretrained classifiers including the Azure, Google, AWS, and Clarifai APIs☆97 · Updated 4 years ago
- This code reproduces the results of the paper "Measuring Data Leakage in Machine-Learning Models with Fisher Information"☆50 · Updated 3 years ago
- Code for the CVPR 2021 paper: Understanding Failures of Deep Networks via Robust Feature Extraction☆36 · Updated 3 years ago
- Code for "One Neuron to Fool Them All"☆8 · Updated 4 years ago
- Code for Stability Training with Noise (STN)☆22 · Updated 4 years ago
- Code for "Testing Robustness Against Unforeseen Adversaries"☆81 · Updated 11 months ago
- Benchmark for LP-relaxed robustness verification of ReLU networks☆41 · Updated 6 years ago
- On the effectiveness of adversarial training against common corruptions [UAI 2022]☆30 · Updated 3 years ago
- Analysis of Adversarial Logit Pairing☆60 · Updated 6 years ago
- Official repo for the paper "Make Some Noise: Reliable and Efficient Single-Step Adversarial Training" (https://arxiv.org/abs/2202.01181)☆25 · Updated 2 years ago
- Logit Pairing Methods Can Fool Gradient-Based Attacks [NeurIPS 2018 Workshop on Security in Machine Learning]☆19 · Updated 6 years ago
- ☆55 · Updated 4 years ago
- ☆21 · Updated 11 months ago