Xinghui-Wu / KENKU
KENKU: Towards Efficient and Stealthy Black-box Adversarial Attacks against ASR Systems
Related projects:
- Source code for "SECurity evaluation platform FOR Speaker Recognition", released in "Defending against Audio Adversarial Examples on Speak…"
- PyTorch implementation of Backdoor Attack against Speaker Verification
- Code for the paper "AdvReverb: Rethinking the Stealthiness of Audio Adversarial Examples to Human Perception"
- A data-free backdoor detector for deep neural networks
- ABS: Scanning Neural Networks for Back-doors by Artificial Brain Stimulation
- Official implementation of the IEEE S&P '22 paper "SoK: How Robust is Deep Neural Network Image Classification Watermarking"
- Source code release for the ACSAC paper "STRIP: A Defence Against Trojan Attacks on Deep Neural Networks"
- Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks (IEEE S&P 2024)
- Code release for DeepJudge (S&P '22)
- Invisible Backdoor Attack with Sample-Specific Triggers
- A PyTorch library for security research on speaker recognition, released in "Towards Understanding and Mitigating Audio Adversarial Examp…"
- Devil-Whisper-Attack
- Code for "PatchCleanser: Certifiably Robust Defense against Adversarial Patches for Any Image Classifier"
- TrojanZoo, a universal PyTorch platform for security research (especially backdoor attacks/defenses) on image classif…
- Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks (RAID 2018)
- Robust Audio Adversarial Example for a Physical Attack
- Code for "CloudLeak: Large-Scale Deep Learning Models Stealing Through Adversarial Examples" (NDSS 2020)
- Code for the paper "Label-Only Membership Inference Attacks"
- Input Purification Defense Against Trojan Attacks on Deep Neural Network Systems
- Documentation of the TensorFlow/Keras implementation of Latent Backdoor Attacks; for details, see the paper "Latent Back…"
- Official implementation of the ICLR 2022 paper "Adversarial Unlearning of Backdoors via Implicit Hypergradient"