twosixlabs / armory
ARMORY Adversarial Robustness Evaluation Test Bed
☆181 · Updated last year
Alternatives and similar repositories for armory
Users interested in armory are comparing it to the libraries listed below.
- ☆124 · Updated 3 years ago
- Privacy Testing for Deep Learning ☆205 · Updated last year
- A Python library for Secure and Explainable Machine Learning ☆176 · Updated 4 months ago
- Example external repository for interacting with armory ☆11 · Updated 3 years ago
- A unified benchmark problem for data poisoning attacks ☆155 · Updated last year
- PhD/MSc course on Machine Learning Security (Univ. Cagliari) ☆210 · Updated 5 months ago
- LaTeX source for the paper "On Evaluating Adversarial Robustness" ☆255 · Updated 4 years ago
- Code for "On Adaptive Attacks to Adversarial Example Defenses" ☆87 · Updated 4 years ago
- A curated list of academic events on AI Security & Privacy ☆152 · Updated 9 months ago
- A curated list of papers on adversarial machine learning (adversarial examples and defense methods) ☆210 · Updated 3 years ago
- A repository to quickly generate synthetic data and associated trojaned deep learning models ☆77 · Updated last year
- Code corresponding to the paper "Adversarial Examples are not Easily Detected..." ☆86 · Updated 7 years ago
- Copycat CNN ☆28 · Updated last year
- Implementation of membership inference and model inversion attacks, extracting training data information from an ML model. Benchmarking … ☆103 · Updated 5 years ago
- ☆40 · Updated last year
- Benchmarking and Visualization Tool for Adversarial Machine Learning ☆187 · Updated 2 years ago
- A CLI that provides a generic automation layer for assessing the security of ML models ☆860 · Updated last year
- This repo keeps track of popular provable training and verification approaches towards robust neural networks, including leaderboards on … ☆99 · Updated 2 years ago
- ☆23 · Updated 3 years ago
- Code for ICML 2019 paper "Simple Black-box Adversarial Attacks" ☆198 · Updated 2 years ago
- ☆96 · Updated 4 years ago
- Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching ☆102 · Updated 9 months ago
- 💡 Adversarial attacks on explanations and how to defend them ☆315 · Updated 6 months ago
- Modular Adversarial Robustness Toolkit ☆19 · Updated this week
- CLEVER (Cross-Lipschitz Extreme Value for nEtwork Robustness) is a robustness metric for deep neural networks ☆61 · Updated 3 years ago
- ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks ☆169 · Updated 3 years ago
- Code and data for the ICLR 2021 paper "Perceptual Adversarial Robustness: Defense Against Unseen Threat Models" ☆55 · Updated 3 years ago
- ☆144 · Updated 7 months ago
- Code for our NeurIPS 2019 *spotlight* "Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers" ☆225 · Updated 5 years ago
- ☆65 · Updated last year