TzviLederer / silent-killer
Implementation of the paper "Silent Killer"
☆25 · Updated last year
Alternatives and similar repositories for silent-killer
Users interested in silent-killer are comparing it to the repositories listed below.
- An Embarrassingly Simple Backdoor Attack on Self-supervised Learning (☆16 · Updated last year)
- ☆51 · Updated 3 years ago
- Code for "Learning Universal Adversarial Perturbation by Adversarial Example" (☆8 · Updated 3 years ago)
- [ACM MM 2023] Improving the Transferability of Adversarial Examples with Arbitrary Style Transfer (☆18 · Updated last year)
- Code for Transferable Unlearnable Examples (☆20 · Updated 2 years ago)
- Stochastic Variance Reduced Ensemble Adversarial Attack for Boosting the Adversarial Transferability (☆24 · Updated 2 years ago)
- Code for "Label-Consistent Backdoor Attacks" (☆57 · Updated 4 years ago)
- ☆29 · Updated 11 months ago
- ☆26 · Updated 2 years ago
- ☆34 · Updated 7 months ago
- Code for the paper "DifAttack: Query-Efficient Black-Box Attack via Disentangled Feature Space" (☆18 · Updated 3 months ago)
- Simple yet effective targeted transferable attack (NeurIPS 2021) (☆51 · Updated 2 years ago)
- Implementation of the paper "MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation" (☆30 · Updated 3 years ago)
- APBench: A Unified Availability Poisoning Attack and Defenses Benchmark (TMLR 08/2024) (☆30 · Updated last month)
- [AAAI 2024] Data-Free Hard-Label Robustness Stealing Attack (☆12 · Updated last year)
- Official PyTorch implementation of "Transferable Adversarial Attacks on Vision Transformers with Token Gradient Regularization" (CVPR 20…) (☆27 · Updated last year)
- ☆56 · Updated last year
- Revisiting Transferable Adversarial Images (arXiv) (☆122 · Updated 2 months ago)
- A minimal PyTorch implementation of Label-Consistent Backdoor Attacks (☆30 · Updated 4 years ago)
- ☆25 · Updated 2 years ago
- ☆18 · Updated last year
- [AAAI 2023] Pseudo Label-Guided Model Inversion Attack via Conditional Generative Adversarial Network (☆29 · Updated 7 months ago)
- ☆17 · Updated 3 years ago
- Convert TensorFlow models to PyTorch models via [MMdnn](https://github.com/microsoft/MMdnn) for adversarial attacks (☆86 · Updated 2 years ago)
- ☆12 · Updated 10 months ago
- Implementation of IEEE TNNLS 2023 and Elsevier PR 2023 papers on backdoor watermarking for deep classification models with unambiguity an… (☆16 · Updated last year)
- Website & documentation: https://sbaresearch.github.io/model-watermarking/ (☆23 · Updated last year)
- [CCS'22] SSLGuard: A Watermarking Scheme for Self-supervised Learning Pre-trained Encoders (☆19 · Updated 2 years ago)
- Towards Efficient and Effective Adversarial Training (NeurIPS 2021) (☆17 · Updated 3 years ago)
- Spectrum simulation attack (ECCV 2022 Oral) towards boosting the transferability of adversarial examples (☆105 · Updated 2 years ago)