peikexin9 / deepxplore
DeepXplore code release
☆395 · Updated 2 years ago
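DeepXplore performs whitebox differential testing of deep learning systems, guided by a neuron-coverage metric: a neuron counts as covered once its scaled activation exceeds a threshold on some test input. Below is a minimal sketch of that metric; `neuron_coverage` is a hypothetical helper over NumPy activation arrays, not DeepXplore's own API.

```python
import numpy as np

def neuron_coverage(layer_activations, threshold=0.75):
    """Fraction of neurons whose scaled activation exceeds `threshold`
    on at least one input. `layer_activations` is a list of arrays,
    one per layer, each shaped (n_inputs, n_neurons)."""
    covered, total = 0, 0
    for acts in layer_activations:
        # Scale activations to [0, 1] within each layer so a single
        # threshold is meaningful across layers of different magnitudes.
        lo, hi = acts.min(), acts.max()
        scaled = (acts - lo) / (hi - lo + 1e-12)
        covered += int((scaled.max(axis=0) > threshold).sum())
        total += acts.shape[1]
    return covered / total

# Toy usage: random "activations" for two layers over 10 inputs.
rng = np.random.default_rng(0)
acts = [rng.random((10, 64)), rng.random((10, 32))]
print(f"neuron coverage: {neuron_coverage(acts):.2%}")
```

In the paper, DeepXplore jointly maximizes this coverage and the disagreement among multiple DNNs under test, so generated inputs both exercise new neurons and expose behavioral differences.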
Alternatives and similar repositories for deepxplore:
Users interested in deepxplore are comparing it to the libraries listed below
- A library for performing coverage-guided fuzzing of neural networks ☆209 · Updated 6 years ago
- Concolic Testing for Deep Neural Networks ☆117 · Updated 3 years ago
- Vision-based algorithms for falsification of convolutional neural networks ☆12 · Updated 6 years ago
- ☆44 · Updated 7 years ago
- A certifiable defense against adversarial examples by training neural networks to be provably robust ☆219 · Updated 5 months ago
- A systematic testing tool for automatically detecting erroneous behaviors of DNN-driven vehicles ☆79 · Updated 5 years ago
- Code release for the paper "Guiding Deep Learning System Testing using Surprise Adequacy" ☆46 · Updated 2 years ago
- Crafting adversarial images ☆223 · Updated 6 years ago
- DLFuzz: an efficient fuzz-testing framework for deep learning systems ☆52 · Updated 6 years ago
- ☆8 · Updated 5 years ago
- Reward Guided Test Generation for Deep Learning ☆20 · Updated 5 months ago
- ☆47 · Updated 6 years ago
- Safety Verification of Deep Neural Networks ☆50 · Updated 6 years ago
- Code corresponding to the paper "Adversarial Examples are not Easily Detected..." ☆85 · Updated 7 years ago
- ☆242 · Updated 6 years ago
- A simple and accurate method to fool deep neural networks ☆359 · Updated 4 years ago
- Benchmarking and Visualization Tool for Adversarial Machine Learning ☆188 · Updated last year
- Model extraction attacks on Machine-Learning-as-a-Service platforms ☆343 · Updated 4 years ago
- ☆24 · Updated 3 years ago
- Testing Deep Neural Networks ☆15 · Updated 6 years ago
- PyTorch code to generate adversarial examples on MNIST and ImageNet data (an FGSM sketch in this spirit follows the list) ☆116 · Updated 6 years ago
- Detecting Adversarial Examples in Deep Neural Networks ☆66 · Updated 6 years ago
- ETH Robustness Analyzer for Deep Neural Networks ☆323 · Updated last year
- Code base for the "Deep Neural Networks are Easily Fooled" CVPR 2015 paper ☆172 · Updated 7 years ago
- Countering Adversarial Images using Input Transformations ☆492 · Updated 3 years ago
- Code for "Black-box Adversarial Attacks with Limited Queries and Information" (http://arxiv.org/abs/1804.08598) ☆174 · Updated 3 years ago
- Code for the IEEE S&P 2018 paper 'Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning' ☆53 · Updated 3 years ago
- ☆101 · Updated 4 years ago
- Robust evasion attacks against neural networks to find adversarial examples ☆811 · Updated 3 years ago
- VizSec17: Web-based visualization tool for adversarial machine learning / LiveDemo ☆130 · Updated last year
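Several of the attack repositories above, including the adversarial-image crafting and PyTorch MNIST/ImageNet entries, build on gradient-based evasion. A minimal FGSM (Fast Gradient Sign Method) sketch in PyTorch is shown below; the toy model and tensors are illustrative placeholders, not code from any listed repository.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    """Perturb x by eps in the direction of the loss gradient's sign,
    the one-step attack of Goodfellow et al. (2015)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0, 1).detach()  # keep pixels in a valid range

# Toy usage: a linear "classifier" on flattened 8x8 single-channel inputs.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64, 10))
x = torch.rand(4, 1, 8, 8)
y = torch.randint(0, 10, (4,))
x_adv = fgsm(model, x, y)
print((x_adv - x).abs().max())  # bounded by eps
```

Stronger attacks in the list, such as optimization-based evasion and limited-query black-box methods, refine this idea by iterating on or estimating the gradient signal rather than taking a single step.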