oscarknagg / adversarial
Creating and defending against adversarial examples
☆41 · Updated 6 years ago
Alternatives and similar repositories for adversarial
Users interested in adversarial are comparing it to the repositories listed below.
- Code for "On Adaptive Attacks to Adversarial Example Defenses"☆87Updated 4 years ago
- A unified benchmark problem for data poisoning attacks☆160Updated 2 years ago
- Attacks Which Do Not Kill Training Make Adversarial Learning Stronger (ICML2020 Paper)☆126Updated 2 years ago
- A curated list of papers on adversarial machine learning (adversarial examples and defense methods).☆212Updated 3 years ago
- This repository contains implementation of 4 adversarial attacks : FGSM, Basic Iterative Method, Projected Gradient Descent(Madry's Attac…☆31Updated 6 years ago
- Code for ICLR2020 "Improving Adversarial Robustness Requires Revisiting Misclassified Examples"☆152Updated 5 years ago
- Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching☆110Updated last year
- Code for paper "Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality".☆125Updated 4 years ago
- A pytorch implementation of "Towards Evaluating the Robustness of Neural Networks"☆59Updated 6 years ago
- This repository provides simple PyTorch implementations for adversarial training methods on CIFAR-10.☆171Updated 4 years ago
- ☆57 · Updated 2 years ago
- ☆160 · Updated 4 years ago
- CLEVER (Cross-Lipschitz Extreme Value for nEtwork Robustness) is a robustness metric for deep neural networks ☆63 · Updated 4 years ago
- Repository for Certified Defenses for Adversarial Patch (ICLR 2020) ☆34 · Updated 5 years ago
- PyTorch 1.0 implementation of adversarial training on MNIST/CIFAR-10, with visualizations of the robust classifier ☆253 · Updated 5 years ago
- Craft poisoned data using MetaPoison ☆53 · Updated 4 years ago
- Adversarial Examples: Attacks and Defenses for Deep Learning ☆32 · Updated 7 years ago
- [ICLR 2020] A repository for extremely fast adversarial training using FGSM ☆449 · Updated last year
- Code and data for the ICLR 2021 paper "Perceptual Adversarial Robustness: Defense Against Unseen Threat Models" ☆56 · Updated 3 years ago
- This repo tracks popular provable training and verification approaches for robust neural networks, including leaderboards on … ☆98 · Updated 3 years ago
- Attacking a dog-vs-fish classifier that uses transfer learning with InceptionV3 ☆71 · Updated 7 years ago
- Privacy Risks of Securing Machine Learning Models against Adversarial Examples ☆45 · Updated 5 years ago
- Code for the unrestricted adversarial examples paper (NeurIPS 2018) ☆65 · Updated 6 years ago
- ☆26 · Updated 6 years ago
- ☆32 · Updated last year
- A richly documented PyTorch implementation of the Carlini-Wagner L2 attack ☆60 · Updated 7 years ago
- Provable adversarial robustness at ImageNet scale ☆401 · Updated 6 years ago
- Implementation of Wasserstein adversarial attacks ☆23 · Updated 4 years ago
- Code for "Detecting Adversarial Samples from Artifacts" (Feinman et al., 2017) ☆111 · Updated 7 years ago
- Understanding Catastrophic Overfitting in Single-step Adversarial Training [AAAI 2021] ☆28 · Updated 3 years ago
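Several of the repositories above implement gradient-based attacks such as FGSM, the Basic Iterative Method, and PGD. For orientation, here is a minimal FGSM sketch assuming a PyTorch image classifier with inputs in [0, 1]; the function name and epsilon value are illustrative assumptions, not code taken from any listed repository.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Craft adversarial examples with the Fast Gradient Sign Method (illustrative sketch)."""
    # Work on a detached copy of the inputs that tracks gradients.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    # Take one step of size epsilon along the sign of the input gradient,
    # then clamp back to the assumed valid image range [0, 1].
    x_adv = x_adv.detach() + epsilon * grad.sign()
    return x_adv.clamp(0.0, 1.0)
```

Iterating this step with a smaller step size and projecting back into an epsilon-ball around the original input yields the Basic Iterative Method and PGD variants referenced above.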