zhangao520 / defense-vgae
DefenseVGAE
☆7 · Updated 4 years ago
Related projects
Alternatives and complementary repositories for defense-vgae
- EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples ☆38 · Updated 6 years ago
- ☆11 · Updated 4 years ago
- A general method for training cost-sensitive robust classifiers ☆21 · Updated 5 years ago
- Codebase for the paper "Adversarial Attacks on Time Series" ☆18 · Updated 5 years ago
- Is RobustBench/AutoAttack a suitable Benchmark for Adversarial Robustness? ☆11 · Updated 2 years ago
- Implementation of the paper "Transferring Robustness for Graph Neural Network Against Poisoning Attacks" ☆20 · Updated 4 years ago
- Codebase for the paper "Adversarial Attacks on Time Series" ☆20 · Updated 5 years ago
- PyTorch implementation of Backdoor Attack against Speaker Verification ☆23 · Updated last year
- Implementation of the Biased Boundary Attack for the NeurIPS 2018 Adversarial Vision Challenge ☆13 · Updated 4 years ago
- ☆32 · Updated 6 years ago
- ☆21 · Updated 4 years ago
- ☆9 · Updated 6 years ago
- Implementation of the Biased Boundary Attack for ImageNet ☆23 · Updated 5 years ago
- ☆18 · Updated 2 years ago
- Fooling neural-based speech recognition systems ☆14 · Updated 7 years ago
- ☆23 · Updated last year
- Code for our NeurIPS 2019 paper: https://arxiv.org/abs/1910.04749 ☆31 · Updated 4 years ago
- ☆10 · Updated 3 years ago
- Code repository for Blackbox Attacks via Surrogate Ensemble Search (BASES), NeurIPS 2022 ☆10 · Updated 3 months ago
- Code release for the ICML 2019 paper "Are generative classifiers more robust to adversarial attacks?" ☆23 · Updated 5 years ago
- Code for the Adversarial Image Detectors and a Saliency Map ☆12 · Updated 7 years ago
- Implementation of Poison Attacks against Text Datasets with Conditional Adversarially Regularized Autoencoder (EMNLP-Findings 2020) ☆15 · Updated 4 years ago
- ☆23 · Updated 5 years ago
- It turns out that adversarial and clean data are not twins, not at all. ☆19 · Updated 7 years ago
- Code for reproducing the white-box adversarial attacks in "EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples," … ☆21 · Updated 6 years ago
- ☆44 · Updated 3 years ago
- Example of the attack described in the paper "Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization" ☆21 · Updated 5 years ago
- Code for the AAAI 2018 accepted paper "Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing the… ☆54 · Updated last year
- Circumventing the defense in "Ensemble Adversarial Training: Attacks and Defenses" ☆39 · Updated 6 years ago
- Code for reproducing the black-box adversarial attacks in "ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Network… ☆55 · Updated 5 years ago