Interfaces for defining Robust ML models and precisely specifying the threat models under which they claim to be secure.
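To illustrate the idea of pairing a defense with a precisely specified threat model, here is a minimal, hypothetical sketch in plain Python. The class and method names (`ThreatModel`, `LinfThreatModel`, `RobustModel`, `is_allowed`) are illustrative assumptions, not robustml's actual API.

```python
from abc import ABC, abstractmethod

class ThreatModel(ABC):
    """Specifies the set of perturbations an attacker is allowed to use.
    Hypothetical interface, not robustml's real API."""

    @abstractmethod
    def is_allowed(self, x_original, x_perturbed):
        """Return True iff x_perturbed is a legal attack on x_original."""

class LinfThreatModel(ThreatModel):
    """Perturbations bounded in L-infinity norm by epsilon."""

    def __init__(self, epsilon):
        self.epsilon = epsilon

    def is_allowed(self, x_original, x_perturbed):
        # Every coordinate may move by at most epsilon.
        return all(abs(a - b) <= self.epsilon
                   for a, b in zip(x_original, x_perturbed))

class RobustModel(ABC):
    """A defense declares the threat model under which it claims security,
    so attacks can be evaluated on exactly those terms."""

    @property
    @abstractmethod
    def threat_model(self):
        """The ThreatModel this defense claims robustness against."""

    @abstractmethod
    def classify(self, x):
        """Return the predicted label for input x."""
```

The point of such an interface is that an attack and a defense can be checked against each other mechanically: an attack is only valid against a model if every adversarial example it produces satisfies `threat_model.is_allowed`.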
☆62 · May 30, 2019 · Updated 6 years ago
Alternatives and similar repositories for robustml
Users interested in robustml are comparing it to the libraries listed below.
- Logit Pairing Methods Can Fool Gradient-Based Attacks [NeurIPS 2018 Workshop on Security in Machine Learning] ☆19 · Dec 2, 2018 · Updated 7 years ago
- Adversarially Robust Neural Network on MNIST. ☆63 · Feb 4, 2022 · Updated 4 years ago
- RayS: A Ray Searching Method for Hard-label Adversarial Attack (KDD 2020) ☆57 · Nov 5, 2020 · Updated 5 years ago
- A community-run reference for state-of-the-art adversarial example defenses. ☆52 · Oct 13, 2024 · Updated last year
- All code for the paper "Piecewise Linear Neural Networks verification: A comparative study" ☆35 · Nov 7, 2018 · Updated 7 years ago
- A simple implementation of Interval Bound Propagation (IBP) using TensorFlow: https://arxiv.org/abs/1810.12715 ☆161 · Dec 20, 2019 · Updated 6 years ago
- Comparison of gradient estimation techniques for black-box adversarial examples ☆11 · Oct 31, 2018 · Updated 7 years ago
- Code for our NeurIPS 2019 *spotlight* "Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers" ☆227 · Nov 9, 2019 · Updated 6 years ago
- A way to achieve uniform confidence far away from the training data. ☆38 · Apr 16, 2021 · Updated 4 years ago
- A powerful white-box adversarial attack that exploits knowledge about the geometry of neural networks to find minimal adversarial perturb… ☆12 · Aug 5, 2020 · Updated 5 years ago
- Certified defense to adversarial examples using CROWN and IBP. Also includes GPU implementation of CROWN verification algorithm (in PyTor… ☆98 · Jun 7, 2021 · Updated 4 years ago
- A modern look at the relationship between sharpness and generalization [ICML 2023] ☆44 · Sep 11, 2023 · Updated 2 years ago
- Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation [NeurIPS 2017] ☆18 · Apr 8, 2018 · Updated 7 years ago
- A method for training neural networks that are provably robust to adversarial attacks. ☆391 · Feb 16, 2022 · Updated 4 years ago
- ☆19 · Nov 11, 2019 · Updated 6 years ago
- A challenge to explore adversarial robustness of neural networks on MNIST. ☆759 · May 3, 2022 · Updated 3 years ago
- LaTeX source for the paper "On Evaluating Adversarial Robustness" ☆260 · Apr 16, 2021 · Updated 4 years ago
- ☆16 · Dec 17, 2018 · Updated 7 years ago
- ☆15 · Jul 24, 2022 · Updated 3 years ago
- Contest proposal and infrastructure for the Unrestricted Adversarial Examples Challenge ☆334 · Sep 17, 2020 · Updated 5 years ago
- Code for the CVPR 2019 article "Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses" ☆137 · Nov 25, 2020 · Updated 5 years ago
- Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples ☆906 · Jun 10, 2023 · Updated 2 years ago
- Provably defending pretrained classifiers, including the Azure, Google, AWS, and Clarifai APIs ☆100 · Apr 2, 2021 · Updated 4 years ago
- Adversarially Robust Generalization Just Requires More Unlabeled Data ☆11 · Aug 8, 2019 · Updated 6 years ago
- Spatially Transformed Adversarial Examples with TensorFlow ☆75 · Nov 3, 2018 · Updated 7 years ago
- Code for the paper "Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality" ☆125 · Nov 4, 2020 · Updated 5 years ago
- Provable Worst Case Guarantees for the Detection of Out-of-Distribution Data ☆13 · Sep 20, 2022 · Updated 3 years ago
- Code for "Diversity can be Transferred: Output Diversification for White- and Black-box Attacks" ☆51 · Nov 2, 2020 · Updated 5 years ago
- SurFree: a fast surrogate-free black-box attack ☆44 · Jun 27, 2024 · Updated last year
- A fast sparse attack on deep neural networks. ☆51 · Sep 27, 2020 · Updated 5 years ago
- Code for "Learning Perceptually-Aligned Representations via Adversarial Robustness" ☆164 · Mar 19, 2020 · Updated 6 years ago
- ☆19 · Jun 10, 2024 · Updated last year
- Source for the paper "Attacking Binarized Neural Networks" ☆23 · Mar 23, 2018 · Updated 7 years ago
- Provable Robustness of ReLU networks via Maximization of Linear Regions [AISTATS 2019] ☆31 · Jul 15, 2020 · Updated 5 years ago
- Code for "Robustness May Be at Odds with Accuracy" ☆90 · Mar 24, 2023 · Updated 2 years ago
- TRADES (TRadeoff-inspired Adversarial DEfense via Surrogate-loss minimization) ☆552 · Mar 30, 2023 · Updated 2 years ago
- CVPR'19 experiments with (on-manifold) adversarial examples. ☆43 · Feb 27, 2020 · Updated 6 years ago
- [ICLR 2020] A repository for extremely fast adversarial training using FGSM ☆449 · Jul 25, 2024 · Updated last year
- Data-independent universal adversarial perturbations ☆63 · Mar 20, 2020 · Updated 6 years ago