google-research / selfstudy-adversarial-robustness
☆123 · Updated 3 years ago
Alternatives and similar repositories for selfstudy-adversarial-robustness:
Users interested in selfstudy-adversarial-robustness are comparing it to the libraries listed below.
- ARMORY Adversarial Robustness Evaluation Test Bed ☆179 · Updated last year
- Code for "On Adaptive Attacks to Adversarial Example Defenses" ☆87 · Updated 4 years ago
- A repository to quickly generate synthetic data and associated trojaned deep learning models ☆78 · Updated last year
- Implementation of membership inference and model inversion attacks, extracting training data information from an ML model. Benchmarking … ☆103 · Updated 5 years ago
- A unified benchmark problem for data poisoning attacks ☆155 · Updated last year
- Code corresponding to the paper "Adversarial Examples are not Easily Detected..." ☆85 · Updated 7 years ago
- This repo keeps track of popular provable training and verification approaches towards robust neural networks, including leaderboards on … ☆99 · Updated 2 years ago
- LaTeX source for the paper "On Evaluating Adversarial Robustness" ☆255 · Updated 4 years ago
- Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching ☆101 · Updated 7 months ago
- Benchmarking and Visualization Tool for Adversarial Machine Learning ☆187 · Updated 2 years ago
- Knockoff Nets: Stealing Functionality of Black-Box Models ☆95 · Updated 2 years ago
- Code and data for the ICLR 2021 paper "Perceptual Adversarial Robustness: Defense Against Unseen Threat Models". ☆55 · Updated 3 years ago
- CLEVER (Cross-Lipschitz Extreme Value for nEtwork Robustness) is a robustness metric for deep neural networks ☆61 · Updated 3 years ago
- ☆144 · Updated 6 months ago
- ☆84 · Updated last year
- Craft poisoned data using MetaPoison ☆50 · Updated 4 years ago
- A community-run reference for state-of-the-art adversarial example defenses. ☆50 · Updated 6 months ago
- ☆31 · Updated 7 months ago
- A unified toolbox for running major robustness verification approaches for DNNs. [S&P 2023] ☆89 · Updated 2 years ago
- A certifiable defense against adversarial examples by training neural networks to be provably robust ☆219 · Updated 8 months ago
- Privacy Testing for Deep Learning ☆202 · Updated last year
- Code for Auditing DPSGD ☆37 · Updated 3 years ago
- Code for "Testing Robustness Against Unforeseen Adversaries" ☆81 · Updated 8 months ago
- A curated list of papers on adversarial machine learning (adversarial examples and defense methods). ☆210 · Updated 2 years ago
- Library for training globally-robust neural networks. ☆28 · Updated last year
- A Python library for Secure and Explainable Machine Learning ☆173 · Updated 2 months ago
- ☆85 · Updated 4 years ago
- Code for model-targeted poisoning ☆12 · Updated last year
- Code for "Black-box Adversarial Attacks with Limited Queries and Information" (http://arxiv.org/abs/1804.08598) ☆177 · Updated 3 years ago
- Interfaces for defining Robust ML models and precisely specifying the threat models under which they claim to be secure. ☆62 · Updated 5 years ago