vu-aml / adlib
Game-Theoretic Adversarial Machine Learning Library
☆58 · Updated 6 years ago
Related projects
Alternatives and complementary repositories for adlib
- Detecting Adversarial Examples in Deep Neural Networks ☆66 · Updated 6 years ago
- Interfaces for defining Robust ML models and precisely specifying the threat models under which they claim to be secure. ☆62 · Updated 5 years ago
- Implementation of membership inference and model inversion attacks, extracting training data information from an ML model. Benchmarking … ☆99 · Updated 5 years ago
- Task-agnostic universal black-box attacks on computer vision neural networks via procedural noise (CCS'19) ☆55 · Updated 3 years ago
- Code corresponding to the paper "Adversarial Examples are not Easily Detected..." ☆84 · Updated 7 years ago
- Code for the IEEE S&P 2018 paper 'Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning' ☆52 · Updated 3 years ago
- EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples ☆38 · Updated 6 years ago
- VizSec17: Web-based visualization tool for adversarial machine learning / LiveDemo ☆130 · Updated last year
- Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks [NeurIPS 2019] ☆50 · Updated 4 years ago
- Code for "Detecting Adversarial Samples from Artifacts" (Feinman et al., 2017) ☆108 · Updated 6 years ago
- [ICML 2019, 20 min long talk] Robust Decision Trees Against Adversarial Examples ☆67 · Updated 2 years ago
- Athena: A Framework for Defending Machine Learning Systems Against Adversarial Attacks ☆42 · Updated 3 years ago
- Benchmarking and Visualization Tool for Adversarial Machine Learning ☆186 · Updated last year
- Circumventing the defense in "Ensemble Adversarial Training: Attacks and Defenses" ☆39 · Updated 6 years ago
- Ensemble Adversarial Training on MNIST ☆121 · Updated 7 years ago
- A community-run reference for state-of-the-art adversarial example defenses. ☆49 · Updated last month
- A certifiable defense against adversarial examples by training neural networks to be provably robust ☆218 · Updated 3 months ago
- Code for "Black-box Adversarial Attacks with Limited Queries and Information" (http://arxiv.org/abs/1804.08598) ☆174 · Updated 3 years ago
- Privacy Risks of Securing Machine Learning Models against Adversarial Examples ☆44 · Updated 4 years ago
- Interval attacks (adversarial ML) ☆21 · Updated 5 years ago
- AAAI 2019 oral presentation ☆50 · Updated 3 months ago
- Adversarial Examples: Attacks and Defenses for Deep Learning ☆31 · Updated 6 years ago
- Library and experiments for attacking machine learning in discrete domains ☆45 · Updated last year
- Codes for reproducing the robustness evaluation scores in “Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approac… ☆47 · Updated 6 years ago
- An implementation of the 'fast gradient sign method' from the paper 'Explaining and Harnessing Adversarial Examples' (see the sketch after this list) ☆53 · Updated 7 years ago
- Towards Reverse-Engineering Black-Box Neural Networks, ICLR'18 ☆54 · Updated 5 years ago
- It turns out that adversarial and clean data are not twins, not at all. ☆19 · Updated 7 years ago
- Codes for reproducing the white-box adversarial attacks in “EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples,” … ☆21 · Updated 6 years ago
- Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network ☆62 · Updated 5 years ago
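
Several of the entries above implement gradient-based evasion attacks such as the fast gradient sign method (FGSM) referenced in the 'Explaining and Harnessing Adversarial Examples' item. As rough orientation only, here is a minimal PyTorch sketch of FGSM; it is not taken from adlib or any repository listed here, and the `model`, labels, and `epsilon` value are illustrative assumptions.

```python
# Minimal FGSM sketch (illustrative; not the API of adlib or any repo above).
# Assumes a differentiable PyTorch classifier `model` and a labelled batch (x, y)
# with inputs scaled to [0, 1].
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Return x + epsilon * sign(grad_x loss), clamped to the valid input range."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed-gradient step in the direction that increases the loss.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

The black-box entries in the list (e.g., limited-query attacks) follow the same idea but estimate the gradient from model queries instead of computing it directly.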