Hyperparticle / one-pixel-attack-keras
Keras implementation of "One pixel attack for fooling deep neural networks" using differential evolution on CIFAR-10 and ImageNet
☆1,228 · Updated last year
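For orientation, the core idea is to treat a single pixel's position and colour as the search variables of a differential-evolution run that minimizes the classifier's confidence in the true class. Below is a minimal sketch of that idea using SciPy's `differential_evolution`, not the repository's exact code; it assumes a trained Keras CIFAR-10 classifier `model` taking float32 images of shape (32, 32, 3) scaled to [0, 1], and the helper names (`apply_pixel`, `one_pixel_attack`) are hypothetical.

```python
# Minimal one-pixel-attack sketch via differential evolution (assumptions noted above).
import numpy as np
from scipy.optimize import differential_evolution

def apply_pixel(image, pixel):
    """Return a copy of `image` with one pixel overwritten by (r, g, b)."""
    x, y, r, g, b = pixel
    perturbed = image.copy()
    perturbed[int(x), int(y)] = np.array([r, g, b]) / 255.0
    return perturbed

def one_pixel_attack(model, image, true_label, maxiter=75, popsize=400):
    # Search space: pixel coordinates plus RGB values.
    bounds = [(0, 31), (0, 31), (0, 255), (0, 255), (0, 255)]

    def objective(pixel):
        # Fitness: the model's confidence in the true class (to be minimized).
        probs = model.predict(apply_pixel(image, pixel)[None, ...], verbose=0)[0]
        return probs[true_label]

    result = differential_evolution(
        objective, bounds, maxiter=maxiter,
        popsize=max(1, popsize // len(bounds)),
        recombination=1.0, polish=False, seed=0)

    adversarial = apply_pixel(image, result.x)
    new_label = int(np.argmax(model.predict(adversarial[None, ...], verbose=0)[0]))
    return adversarial, new_label, new_label != true_label
```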
Alternatives and similar repositories for one-pixel-attack-keras
Users interested in one-pixel-attack-keras are comparing it to the libraries listed below:
- Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples ☆891 · Updated last year
- Contest Proposal and infrastructure for the Unrestricted Adversarial Examples Challenge ☆331 · Updated 4 years ago
- Tensorflow code for the Bayesian GAN (https://arxiv.org/abs/1705.09558) (NIPS 2017) ☆1,016 · Updated 6 years ago
- Implementation of Papers on Adversarial Examples ☆396 · Updated 2 years ago
- Crafting adversarial images ☆223 · Updated 6 years ago
- A simple and accurate method to fool deep neural networks ☆363 · Updated 5 years ago
- Neural network visualization toolkit for keras ☆2,989 · Updated 3 years ago
- Model extraction attacks on Machine-Learning-as-a-Service platforms. ☆349 · Updated 4 years ago
- Fader Networks: Manipulating Images by Sliding Attributes - NIPS 2017 ☆762 · Updated 3 years ago
- A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX ☆2,859 · Updated last year
- A challenge to explore adversarial robustness of neural networks on MNIST. ☆751 · Updated 3 years ago
- ImageNet classifier with state-of-the-art adversarial robustness ☆684 · Updated 5 years ago
- PyTorch implementation of convolutional neural network adversarial attack techniques ☆357 · Updated 6 years ago
- Robust evasion attacks against neural networks to find adversarial examples ☆827 · Updated 3 years ago
- Countering Adversarial Images using Input Transformations ☆497 · Updated 3 years ago
- ☆245 · Updated 6 years ago
- Tutorials and implementations for "Self-normalizing networks" ☆1,585 · Updated 3 years ago
- A curated list of awesome resources for adversarial examples in deep learning ☆264 · Updated 4 years ago
- Repo of simple adversarial examples on vanilla neural networks trained on MNIST ☆121 · Updated 5 years ago
- A challenge to explore adversarial robustness of neural networks on CIFAR10. ☆494 · Updated 3 years ago
- Black-Box Adversarial Attack on Public Face Recognition Systems ☆410 · Updated 3 years ago
- A collection of infrastructure and tools for research in neural network interpretability. ☆4,693 · Updated 2 years ago
- Various tutorials given for welcoming new students at MILA. ☆985 · Updated 6 years ago
- 🏖 Keras Implementation of Painting outside the box ☆1,146 · Updated 2 years ago
- A CNN visualizer ☆1,002 · Updated 7 years ago
- Generating faces with deconvolution networks ☆890 · Updated 3 years ago
- Benchmarks for popular CNN models ☆2,531 · Updated 7 years ago
- No dependency caffe replacement ☆336 · Updated 7 years ago
- Advbox is a toolbox to generate adversarial examples that fool neural networks in PaddlePaddle, PyTorch, Caffe2, MxNet, Keras, TensorFlow and … ☆1,392 · Updated 2 years ago
- A Toolbox for Adversarial Robustness Research ☆1,336 · Updated last year