anishathalye / obfuscated-gradients
Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
☆906 · Updated Jun 10, 2023
Alternatives and similar repositories for obfuscated-gradients
Users interested in obfuscated-gradients are comparing it to the libraries listed below.
- Robust evasion attacks against neural networks to find adversarial examples (☆857, updated Jun 1, 2021)
- ImageNet classifier with state-of-the-art adversarial robustness (☆686, updated Dec 31, 2019)
- A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX (☆2,938, updated Dec 3, 2025)
- A challenge to explore adversarial robustness of neural networks on MNIST (☆758, updated May 3, 2022)
- An adversarial example library for constructing attacks, building defenses, and benchmarking both (☆6,410, updated Apr 10, 2024)
- LaTeX source for the paper "On Evaluating Adversarial Robustness" (☆259, updated Apr 16, 2021)
- TRADES (TRadeoff-inspired Adversarial DEfense via Surrogate-loss minimization) (☆552, updated Mar 30, 2023)
- A Toolbox for Adversarial Robustness Research (☆1,363, updated Sep 14, 2023)
- A challenge to explore adversarial robustness of neural networks on CIFAR10 (☆504, updated Aug 30, 2021)
- Countering Adversarial Images Using Input Transformations (☆498, updated Sep 29, 2021)
- A method for training neural networks that are provably robust to adversarial attacks (☆390, updated Feb 16, 2022)
- Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models (published in ICLR 2018) (☆246, updated Oct 24, 2019)
- Code for the CVPR 2019 article "Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses" (☆137, updated Nov 25, 2020)
- MagNet: a Two-Pronged Defense against Adversarial Examples (☆101, updated Oct 13, 2018)
- The winning submission for NIPS 2017: Defense Against Adversarial Attack, from team TSAIL (☆237, updated Mar 27, 2018)
- Ensemble Adversarial Training on MNIST (☆122, updated Jun 20, 2017)
- Crafting adversarial images (☆222, updated Jan 3, 2019)
- Benchmarking and Visualization Tool for Adversarial Machine Learning (☆188, updated Apr 4, 2023)
- Code for "Detecting Adversarial Samples from Artifacts" (Feinman et al., 2017) (☆111, updated Feb 14, 2018)
- Implementation of Papers on Adversarial Examples (☆397, updated Apr 24, 2023)
- Datasets for the paper "Adversarial Examples Are Not Bugs, They Are Features" (☆187, updated Sep 17, 2020)
- Contest proposal and infrastructure for the Unrestricted Adversarial Examples Challenge (☆333, updated Sep 17, 2020)
- A library for experimenting with, training, and evaluating neural networks, with a focus on adversarial robustness (☆944, updated Jan 11, 2024)
- Provable adversarial robustness at ImageNet scale (☆404, updated May 20, 2019)
- Analysis of Adversarial Logit Pairing (☆60, updated Aug 13, 2018)
- TensorFlow implementation of an adversarial attack on capsule networks (☆173, updated Nov 9, 2017)
- Code for "Reliable Evaluation of Adversarial Robustness with an Ensemble of Diverse Parameter-free Attacks" (☆737, updated May 16, 2024)
- Code for the NeurIPS 2019 spotlight "Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers" (☆228, updated Nov 9, 2019)
- A non-targeted adversarial attack method that won first place in the NIPS 2017 non-targeted adversarial attacks competition (☆252, updated Oct 30, 2019)
- Mitigating Adversarial Effects Through Randomization (☆120, updated Mar 20, 2018)
- ☆18, updated Sep 25, 2019
- Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks, in ICCV 2019 (☆58, updated Oct 24, 2019)
- Code for "Robustness May Be at Odds with Accuracy" (☆91, updated Mar 24, 2023)
- Adversarial Robustness Toolbox (ART), a Python library for machine learning security: evasion, poisoning, extraction, inference; Red and… (☆5,821, updated Dec 12, 2025)
- ☆219, updated May 23, 2018
- Code for the NeurIPS 2019 paper "You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle" (☆179, updated Jul 25, 2024)
- AAAI 2019 oral presentation (☆53, updated May 30, 2025)
- Generating Natural Adversarial Examples, ICLR 2018 (☆142, updated May 17, 2018)
- A simple implementation of an Adversarial Autoencoding ATN (AAE ATN) (☆30, updated Jun 9, 2017)
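Most of the repositories above implement, defend against, or benchmark gradient-based evasion attacks. As a rough illustration of the core idea they share, here is a minimal sketch of the fast gradient sign method (FGSM) on a toy logistic-regression model in plain NumPy. The weights, input, and epsilon below are made-up values for demonstration and are not taken from any of the listed projects.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One-step FGSM: move x by eps in the sign of the loss gradient w.r.t. x.

    For logistic regression with cross-entropy loss, dL/dx = (p - y) * w.
    """
    p = sigmoid(w @ x + b)                  # model's confidence on the input
    grad_x = (p - y) * w                    # gradient of the loss w.r.t. the input
    # Perturb and clip back into the valid [0, 1] "pixel" range.
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

# Toy "model" and input (illustrative values only).
w = np.array([1.5, -2.0, 0.5, 1.0])
b = -0.25
x = np.array([0.8, 0.1, 0.6, 0.4])  # clean input with true label 1
y = 1.0

x_adv = fgsm(x, y, w, b, eps=0.5)
print(sigmoid(w @ x + b) > 0.5)      # clean input is classified as 1 (True)
print(sigmoid(w @ x_adv + b) > 0.5)  # perturbed input flips to 0 (False)
```

The libraries listed here (e.g. the PyTorch/TensorFlow/JAX toolboxes) wrap the same pattern behind attack classes that compute the input gradient by backpropagation through an arbitrary model rather than in closed form.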