matthewwicker / SafeCV
Vision-based algorithms for falsification of convolutional neural networks
☆12 · Updated 7 years ago
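SafeCV's description refers to falsifying local robustness claims of a CNN: finding an input close to a correctly classified image that the network misclassifies. As a rough illustration only (not SafeCV's actual algorithm or API), a black-box falsification loop might look like the sketch below; `predict`, `eps`, and `max_queries` are hypothetical names introduced for this example.

```python
import numpy as np

def falsify(predict, image, eps=0.05, max_queries=1000, seed=0):
    """Search an L_inf ball of radius eps around `image` for an input
    whose predicted label differs from the original prediction.
    `predict` is a hypothetical stand-in for any CNN's forward pass."""
    rng = np.random.default_rng(seed)
    original_label = int(np.argmax(predict(image)))
    for _ in range(max_queries):
        # Sample a random perturbation within the allowed budget.
        noise = rng.uniform(-eps, eps, size=image.shape)
        candidate = np.clip(image + noise, 0.0, 1.0)
        if int(np.argmax(predict(candidate))) != original_label:
            # Counterexample found: the local robustness claim is falsified.
            return candidate
    return None  # no falsifying input found within the query budget
```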
Alternatives and similar repositories for SafeCV:
Users interested in SafeCV are comparing it to the libraries listed below.
- Concolic Testing for Deep Neural Networks ☆119 · Updated 3 years ago
- Testing Deep Neural Networks ☆15 · Updated 6 years ago
- Safety Verification of Deep Neural Networks ☆50 · Updated 7 years ago
- ☆24 · Updated 4 years ago
- Detecting Adversarial Examples in Deep Neural Networks ☆67 · Updated 7 years ago
- Ensemble Adversarial Training on MNIST ☆121 · Updated 7 years ago
- Code for "Detecting Adversarial Samples from Artifacts" (Feinman et al., 2017) ☆109 · Updated 7 years ago
- Code corresponding to the paper "Adversarial Examples are not Easily Detected..." ☆85 · Updated 7 years ago
- Code for the paper "Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality" ☆123 · Updated 4 years ago
- ☆47 · Updated 6 years ago
- The released code of Neurify in NIPS 2018 ☆49 · Updated 2 years ago
- DLFuzz: An Efficient Fuzzing Testing Framework of Deep Learning Systems ☆52 · Updated 6 years ago
- ☆24 · Updated 4 years ago
- MagNet: a Two-Pronged Defense against Adversarial Examples ☆98 · Updated 6 years ago
- ☆28 · Updated 7 years ago
- ☆26 · Updated 2 years ago
- Code release for RobOT (ICSE'21) ☆15 · Updated 2 years ago
- Code for "Black-box Adversarial Attacks with Limited Queries and Information" (http://arxiv.org/abs/1804.08598) ☆177 · Updated 3 years ago
- This repo keeps track of popular provable training and verification approaches towards robust neural networks, including leaderboards on … ☆99 · Updated 2 years ago
- Benchmarking and Visualization Tool for Adversarial Machine Learning ☆187 · Updated 2 years ago
- The released code of ReluVal in USENIX Security 2018 ☆59 · Updated 5 years ago
- A systematic testing tool for automatically detecting erroneous behaviors of DNN-driven vehicles ☆81 · Updated 6 years ago
- This repository is for the NeurIPS 2018 spotlight paper "Attacks Meet Interpretability: Attribute-steered Detection of Adversarial Samples." ☆31 · Updated 2 years ago
- ☆9 · Updated 5 years ago
- Crafting adversarial images ☆223 · Updated 6 years ago
- Robustness benchmark for DNN models ☆67 · Updated 2 years ago
- Code release of the paper "Guiding Deep Learning System Testing using Surprise Adequacy" ☆46 · Updated 2 years ago
- Robustness vs. Accuracy Survey on ImageNet ☆98 · Updated 3 years ago
- Reward Guided Test Generation for Deep Learning ☆20 · Updated 8 months ago
- Code used in 'Exploring the Space of Black-box Attacks on Deep Neural Networks' (https://arxiv.org/abs/1712.09491) ☆61 · Updated 7 years ago