google-research / selfstudy-adversarial-robustness
☆125 · Updated 3 years ago
Alternatives and similar repositories for selfstudy-adversarial-robustness
Users interested in selfstudy-adversarial-robustness are comparing it to the libraries listed below.
- ARMORY Adversarial Robustness Evaluation Test Bed · ☆182 · Updated last year
- A unified benchmark problem for data poisoning attacks · ☆156 · Updated last year
- LaTeX source for the paper "On Evaluating Adversarial Robustness" · ☆255 · Updated 4 years ago
- Code for "On Adaptive Attacks to Adversarial Example Defenses" · ☆87 · Updated 4 years ago
- This repo keeps track of popular provable training and verification approaches towards robust neural networks, including leaderboards on … · ☆98 · Updated 2 years ago
- Code and data for the ICLR 2021 paper "Perceptual Adversarial Robustness: Defense Against Unseen Threat Models". · ☆55 · Updated 3 years ago
- Implementation of membership inference and model inversion attacks, extracting training data information from an ML model. Benchmarking … · ☆103 · Updated 5 years ago
- Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching · ☆103 · Updated 10 months ago
- Benchmarking and Visualization Tool for Adversarial Machine Learning · ☆187 · Updated 2 years ago
- A repository to quickly generate synthetic data and associated trojaned deep learning models · ☆77 · Updated 2 years ago
- ☆85 · Updated last year
- ☆145 · Updated 8 months ago
- Code corresponding to the paper "Adversarial Examples are not Easily Detected..." · ☆87 · Updated 7 years ago
- A certifiable defense against adversarial examples by training neural networks to be provably robust · ☆220 · Updated 10 months ago
- A Python library for Secure and Explainable Machine Learning · ☆180 · Updated 4 months ago
- Code for Auditing DPSGD · ☆37 · Updated 3 years ago
- Code for our NeurIPS 2019 *spotlight* "Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers" · ☆226 · Updated 5 years ago
- CLEVER (Cross-Lipschitz Extreme Value for nEtwork Robustness) is a robustness metric for deep neural networks · ☆61 · Updated 3 years ago
- Codes for reproducing the black-box adversarial attacks in “ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Network… · ☆58 · Updated 6 years ago
- ☆31 · Updated 9 months ago
- ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks · ☆169 · Updated 3 years ago
- Example external repository for interacting with armory. · ☆11 · Updated 3 years ago
- ☆51 · Updated 4 years ago
- A curated list of papers on adversarial machine learning (adversarial examples and defense methods). · ☆210 · Updated 3 years ago
- Knockoff Nets: Stealing Functionality of Black-Box Models · ☆99 · Updated 2 years ago
- Official repository for our NeurIPS 2021 paper "Unadversarial Examples: Designing Objects for Robust Vision" · ☆104 · Updated 10 months ago
- Certified defense to adversarial examples using CROWN and IBP. Also includes GPU implementation of CROWN verification algorithm (in PyTor… · ☆95 · Updated 4 years ago
- Creating and defending against adversarial examples · ☆42 · Updated 6 years ago
- Interfaces for defining Robust ML models and precisely specifying the threat models under which they claim to be secure. · ☆62 · Updated 6 years ago
- Official implementation for paper: A New Defense Against Adversarial Images: Turning a Weakness into a Strength · ☆38 · Updated 5 years ago