zhangao520 / defense-vgae
DefenseVGAE
☆7 · Updated 4 years ago
Related projects:
- A general method for training a cost-sensitive robust classifier ☆21 · Updated 5 years ago
- ☆21 · Updated 4 years ago
- Fooling neural-based speech recognition systems. ☆14 · Updated 7 years ago
- Codebase for the paper "Adversarial Attacks on Time Series" ☆18 · Updated 5 years ago
- ☆19 · Updated 3 years ago
- EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples ☆37 · Updated 5 years ago
- Codebase for the paper "Adversarial Attacks on Time Series" ☆20 · Updated 5 years ago
- ☆18 · Updated 2 years ago
- Official codebase of the paper "Invert and Defend: Model-based Approximate Inversion of Generative Adversarial Network For Secure Inferen…" ☆15 · Updated last year
- Implementation of the paper "Transferring Robustness for Graph Neural Network Against Poisoning Attacks" ☆19 · Updated 4 years ago
- Code for the Adversarial Image Detectors and a Saliency Map ☆12 · Updated 7 years ago
- ☆22 · Updated last year
- Implementation for "Poison Attacks against Text Datasets with Conditional Adversarially Regularized Autoencoder" (EMNLP-Findings 2020) ☆15 · Updated 3 years ago
- PyTorch code for the KDD 2018 paper "Towards Explanation of DNN-based Prediction with Guided Feature Inversion" ☆22 · Updated 5 years ago
- ☆11 · Updated 4 years ago
- Implementation for "Jacobian Adversarially Regularized Networks for Robustness" (ICLR 2020) ☆21 · Updated 4 years ago
- "Honest-but-Curious Nets: Sensitive Attributes of Private Inputs Can Be Secretly Coded into the Classifiers' Outputs" (ACM CCS '21) ☆17 · Updated last year
- Code for the IJCAI 2019 paper "Real-time Adversarial Attack" ☆20 · Updated 4 years ago
- Official implementation of the paper "ClusTR: Clustering Training for Robustness" ☆20 · Updated 2 years ago
- Implementation for "What it Thinks is Important is Important: Robustness Transfers through Input Gradients" (CVPR 2020 Oral) ☆16 · Updated last year
- PyTorch implementation of "Backdoor Attack against Speaker Verification" ☆23 · Updated last year
- ☆23 · Updated 5 years ago
- A Restricted Black-box Adversarial Framework Towards Attacking Graph Embedding Models ☆36 · Updated 3 years ago
- Code release for the ICML 2019 paper "Are generative classifiers more robust to adversarial attacks?" ☆23 · Updated 5 years ago
- ☆12 · Updated 3 years ago
- Coupling rejection strategy against adversarial attacks (CVPR 2022) ☆28 · Updated 2 years ago
- Code for reproducing the white-box adversarial attacks in "EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples," … ☆21 · Updated 5 years ago
- It turns out that adversarial and clean data are not twins, not at all. ☆19 · Updated 7 years ago
- Interval attacks (adversarial ML) ☆21 · Updated 5 years ago
- Adversarial Attacks on Node Embeddings via Graph Poisoning ☆59 · Updated 4 years ago