anishmadan23 / adversarial-attacks-pytorch
This repository contains implementations of four adversarial attacks: FGSM, the Basic Iterative Method, Projected Gradient Descent (Madry's attack), and the Carlini-Wagner L2 attack. It also includes code to visualise the attacks, along with a detailed report and a poster explaining them.
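For orientation, the one-step FGSM attack and its iterated, projected variant PGD can be sketched in a few lines of PyTorch. This is a minimal illustration under standard definitions of the two attacks, not the repository's own code; the `model`, `eps`, `alpha`, and `steps` names are placeholders.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, eps):
    """FGSM: one step of size eps in the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Move each input coordinate by eps in the direction that increases the loss.
    x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.clamp(0, 1).detach()

def pgd_attack(model, x, y, eps, alpha, steps):
    """PGD: iterated FGSM with projection back into the L-infinity eps-ball."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project onto the eps-ball around the clean input, then the valid pixel range.
        x_adv = x + (x_adv - x).clamp(-eps, eps)
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```

The Basic Iterative Method is the same loop with the projection step omitted, and the Carlini-Wagner attack replaces the sign-of-gradient step with an optimization over a margin loss.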
Related projects:
- Code for "Diversity can be Transferred: Output Diversification for White- and Black-box Attacks"☆52Updated 3 years ago
- Code for the ICLR 2020 paper "Improving Adversarial Robustness Requires Revisiting Misclassified Examples"
- Code for the paper "Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality"
- Code for the unrestricted adversarial examples paper (NeurIPS 2018)
- Code used in "Decision Boundary Analysis of Adversarial Examples" (https://openreview.net/forum?id=BkpiPMbA-)
- ATTA (Efficient Adversarial Training with Transferable Adversarial Examples)
- A richly documented PyTorch implementation of the Carlini-Wagner L2 attack
- Adversarial Defense for Ensemble Models (ICML 2019)
- Feature Scattering Adversarial Training (NeurIPS 2019)
- Implementation of the Boundary Attack algorithm as described in Brendel, Wieland, Jonas Rauber, and Matthias Bethge. "Decision-Based Adve…
- Adversarial Examples: Attacks and Defenses for Deep Learning
- Repository for Certified Defenses for Adversarial Patch (ICLR 2020)
- AAAI 2019 oral presentation
- Code for "Detecting Adversarial Samples from Artifacts" (Feinman et al., 2017)
- Code for the AAAI 2018 paper "Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing the…
- Interval attacks (adversarial ML)
- CLEVER (Cross-Lipschitz Extreme Value for nEtwork Robustness), a robustness metric for deep neural networks
- Code for reproducing the query-efficient black-box attacks in "AutoZOOM: Autoencoder-based Zeroth Order Optimization Method for Attacking B…
- Code for Stability Training with Noise (STN)
- Code for "Black-Box Adversarial Attack with Transferable Model-based Embedding"
- Code and data for the ICLR 2021 paper "Perceptual Adversarial Robustness: Defense Against Unseen Threat Models"
- Attacking a dog-vs-fish classifier that uses transfer learning (InceptionV3)
- Code for the NeurIPS 2020 paper "Backpropagating Linearly Improves Transferability of Adversarial Examples"
- Code for "On Adaptive Attacks to Adversarial Example Defenses"☆84Updated 3 years ago
- Code for the paper "(De)Randomized Smoothing for Certifiable Defense against Patch Attacks" by Alexander Levine and Soheil Feizi.☆16Updated 2 years ago