Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
☆908 · Jun 10, 2023 · Updated 2 years ago
Alternatives and similar repositories for obfuscated-gradients
Users interested in obfuscated-gradients are comparing it to the repositories listed below.
- Robust evasion attacks against neural networks to find adversarial examples ☆860 · Jun 1, 2021 · Updated 4 years ago
- ImageNet classifier with state-of-the-art adversarial robustness ☆684 · Dec 31, 2019 · Updated 6 years ago
- A challenge to explore adversarial robustness of neural networks on MNIST. ☆759 · May 3, 2022 · Updated 3 years ago
- A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX ☆2,946 · Dec 3, 2025 · Updated 3 months ago
- An adversarial example library for constructing attacks, building defenses, and benchmarking both ☆6,425 · Apr 10, 2024 · Updated last year
- LaTeX source for the paper "On Evaluating Adversarial Robustness" ☆260 · Apr 16, 2021 · Updated 4 years ago
- A challenge to explore adversarial robustness of neural networks on CIFAR10. ☆507 · Aug 30, 2021 · Updated 4 years ago
- A Toolbox for Adversarial Robustness Research ☆1,365 · Sep 14, 2023 · Updated 2 years ago
- TRADES (TRadeoff-inspired Adversarial DEfense via Surrogate-loss minimization) ☆554 · Mar 30, 2023 · Updated 2 years ago
- Countering Adversarial Images Using Input Transformations ☆497 · Sep 29, 2021 · Updated 4 years ago
- MagNet: a Two-Pronged Defense against Adversarial Examples ☆102 · Oct 13, 2018 · Updated 7 years ago
- Code for the CVPR 2019 paper "Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses" ☆137 · Nov 25, 2020 · Updated 5 years ago
- Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models (published in ICLR 2018) ☆247 · Oct 24, 2019 · Updated 6 years ago
- A method for training neural networks that are provably robust to adversarial attacks. ☆391 · Feb 16, 2022 · Updated 4 years ago
- Ensemble Adversarial Training on MNIST ☆122 · Jun 20, 2017 · Updated 8 years ago
- Benchmarking and Visualization Tool for Adversarial Machine Learning ☆188 · Apr 4, 2023 · Updated 2 years ago
- Crafting adversarial images ☆222 · Jan 3, 2019 · Updated 7 years ago
- The winning submission for NIPS 2017: Defense Against Adversarial Attack, from team TSAIL ☆238 · Mar 27, 2018 · Updated 7 years ago
- Implementations of papers on adversarial examples ☆397 · Apr 24, 2023 · Updated 2 years ago
- Contest proposal and infrastructure for the Unrestricted Adversarial Examples Challenge ☆334 · Sep 17, 2020 · Updated 5 years ago
- Datasets for the paper "Adversarial Examples Are Not Bugs, They Are Features" ☆187 · Sep 17, 2020 · Updated 5 years ago
- Code for "Detecting Adversarial Samples from Artifacts" (Feinman et al., 2017) ☆111 · Feb 14, 2018 · Updated 8 years ago
- Mitigating Adversarial Effects Through Randomization ☆120 · Mar 20, 2018 · Updated 8 years ago
- Analysis of Adversarial Logit Pairing ☆61 · Aug 13, 2018 · Updated 7 years ago
- TensorFlow implementation of an adversarial attack on Capsule Networks ☆173 · Nov 9, 2017 · Updated 8 years ago
- AAAI 2019 oral presentation ☆53 · May 30, 2025 · Updated 9 months ago
- Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks (ICCV 2019) ☆58 · Oct 24, 2019 · Updated 6 years ago
- A non-targeted adversarial attack method that won first place in the NIPS 2017 non-targeted adversarial attacks competition ☆253 · Oct 30, 2019 · Updated 6 years ago
- A library for experimenting with, training, and evaluating neural networks, with a focus on adversarial robustness ☆944 · Jan 11, 2024 · Updated 2 years ago
- Provable adversarial robustness at ImageNet scale ☆407 · May 20, 2019 · Updated 6 years ago
- Code for "Reliable Evaluation of Adversarial Robustness with an Ensemble of Diverse Parameter-Free Attacks" ☆743 · May 16, 2024 · Updated last year
- ☆48 · Feb 9, 2021 · Updated 5 years ago
- Code for "Robustness May Be at Odds with Accuracy" ☆91 · Mar 24, 2023 · Updated 2 years ago
- A simple implementation of an Adversarial Autoencoding ATN (AAE ATN) ☆30 · Jun 9, 2017 · Updated 8 years ago
- Improving Transferability of Adversarial Examples with Input Diversity ☆168 · Apr 30, 2019 · Updated 6 years ago
- Code for the NeurIPS 2019 paper "You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle" ☆181 · Jul 25, 2024 · Updated last year
- Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and… ☆5,893 · Dec 12, 2025 · Updated 3 months ago
- Detecting Adversarial Examples in Deep Neural Networks ☆69 · Mar 19, 2018 · Updated 8 years ago
- A TensorFlow implementation and improvement of the CVPR 2019 paper "ComDefend" ☆15 · Apr 13, 2020 · Updated 5 years ago