oscarknagg / adversarial
Creating and defending against adversarial examples
☆42 · Updated 6 years ago
Alternatives and similar repositories for adversarial
Users interested in adversarial are comparing it to the libraries listed below
- A PyTorch implementation of "Towards Evaluating the Robustness of Neural Networks" ☆57 · Updated 5 years ago
- Code for the ICLR 2020 paper "Improving Adversarial Robustness Requires Revisiting Misclassified Examples" ☆150 · Updated 4 years ago
- Code for the unrestricted adversarial examples paper (NeurIPS 2018) ☆64 · Updated 5 years ago
- Universal Adversarial Perturbations (UAPs) for PyTorch ☆48 · Updated 3 years ago
- Code for "On Adaptive Attacks to Adversarial Example Defenses" ☆87 · Updated 4 years ago
- Repository for "Certified Defenses for Adversarial Patches" (ICLR 2020) ☆33 · Updated 4 years ago
- Craft poisoned data using MetaPoison ☆51 · Updated 4 years ago
- "Attacks Which Do Not Kill Training Make Adversarial Learning Stronger" (ICML 2020 paper) ☆125 · Updated last year
- CLEVER (Cross-Lipschitz Extreme Value for nEtwork Robustness) is a robustness metric for deep neural networks ☆61 · Updated 3 years ago
- PyTorch code for ens_adv_train ☆15 · Updated 6 years ago
- Code for the paper "Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality" ☆124 · Updated 4 years ago
- Adversarial Distributional Training (NeurIPS 2020) ☆63 · Updated 4 years ago
- ☆54 · Updated 2 years ago
- Semi-supervised learning for adversarial robustness (https://arxiv.org/pdf/1905.13736.pdf) ☆142 · Updated 5 years ago
- This repository contains implementations of four adversarial attacks: FGSM, Basic Iterative Method, Projected Gradient Descent (Madry's Attac… ☆32 · Updated 6 years ago (see the FGSM sketch after this list)
- ☆158 · Updated 4 years ago
- Attacking a dog-vs-fish classifier that uses transfer learning (InceptionV3) ☆70 · Updated 7 years ago
- Code and data for the ICLR 2021 paper "Perceptual Adversarial Robustness: Defense Against Unseen Threat Models" ☆55 · Updated 3 years ago
- Code for the ICML 2019 paper "On the Convergence and Robustness of Adversarial Training" ☆34 · Updated 5 years ago
- A unified benchmark problem for data poisoning attacks ☆156 · Updated last year
- Black-box attacks for deep neural network models ☆70 · Updated 6 years ago
- ☆85 · Updated 4 years ago
- Code for "Diversity can be Transferred: Output Diversification for White- and Black-box Attacks" ☆53 · Updated 4 years ago
- Understanding and Improving Fast Adversarial Training [NeurIPS 2020] ☆95 · Updated 3 years ago
- This repository provides simple PyTorch implementations of adversarial training methods on CIFAR-10 ☆167 · Updated 4 years ago
- Paper-sharing repository for adversarial-related work ☆45 · Updated last month
- Implementation of "Defense-VAE: A Fast and Accurate Defense against Adversarial Attacks" ☆14 · Updated 4 years ago
- This repo keeps track of popular provable training and verification approaches towards robust neural networks, including leaderboards on … ☆98 · Updated 2 years ago
- ☆9 · Updated 4 years ago
- This repository contains the official PyTorch implementation of the GeoDA algorithm. GeoDA is a black-box attack to generate adversarial exam… ☆33 · Updated 4 years ago
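Several entries above implement gradient-based attacks such as FGSM and PGD. As a point of reference, here is a minimal FGSM sketch in PyTorch, assuming a classifier `model` that maps images in [0, 1] to logits and a cross-entropy loss; the function name and the 8/255 budget are illustrative choices, not code from any of the listed repositories.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """One-step FGSM: perturb x in the direction of the sign of the loss gradient.

    Assumes `model` maps images in [0, 1] to class logits; epsilon is an
    L-infinity budget (8/255 is a common CIFAR-10 choice, not a repo default).
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    # Differentiate the loss w.r.t. the input only, leaving model gradients untouched.
    grad, = torch.autograd.grad(loss, x_adv)
    # Take one signed-gradient step, then clamp back to the valid image range.
    x_adv = (x_adv + epsilon * grad.sign()).clamp(0.0, 1.0)
    return x_adv.detach()
```

Projected Gradient Descent (Madry et al.) iterates this step with a smaller step size and projects back into the epsilon-ball around the original input after each iteration.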