WangLab2021 / AI-Security
☆12 · Updated 3 weeks ago
Alternatives and similar repositories for AI-Security
Users that are interested in AI-Security are comparing it to the libraries listed below
- A PyTorch Implementation of Some Backdoor Attack Algorithms, Including BadNets, SIG, FIBA, FTrojan ... (a minimal BadNets-style sketch appears after this list) ☆20 · Updated 9 months ago
- Invisible Backdoor Attack with Sample-Specific Triggers ☆99 · Updated 3 years ago
- ☆537 · Updated 2 months ago
- WaNet - Imperceptible Warping-based Backdoor Attack (ICLR 2021) ☆128 · Updated 10 months ago
- Official Repository for the AAAI-20 paper "Hidden Trigger Backdoor Attacks" ☆130 · Updated last year
- A curated list of papers & resources on backdoor attacks and defenses in deep learning. ☆218 · Updated last year
- ☆115 · Updated 3 months ago
- TransferAttack is a PyTorch framework for boosting the adversarial transferability of image classification attacks. ☆387 · Updated last week
- ☆14 · Updated last year
- Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks ☆17 · Updated 6 years ago
- The official repo for the paper "An Adaptive Model Ensemble Adversarial Attack for Boosting Adversarial Transferability" ☆41 · Updated last year
- [ICCV-2023] Gradient inversion attack, Federated learning, Generative adversarial network. ☆45 · Updated last year
- A paper list for localized adversarial patch research ☆157 · Updated last month
- ☆21 · Updated 2 years ago
- Revisiting Transferable Adversarial Images (arXiv) ☆129 · Updated 6 months ago
- ☆46 · Updated last year
- A curated list of papers & resources linked to data poisoning, backdoor attacks and defenses against them (no longer maintained) ☆269 · Updated 8 months ago
- Spectrum simulation attack (ECCV'2022 Oral) towards boosting the transferability of adversarial examples ☆111 · Updated 3 years ago
- A curated list of papers on the transferability of adversarial examples ☆73 · Updated last year
- This is the repository for the USENIX Security 2023 paper "Hard-label Black-box Universal Adversarial Patch Attack". ☆15 · Updated 2 years ago
- [NeurIPS 2023] Boosting Adversarial Transferability by Achieving Flat Local Maxima ☆31 · Updated last year
- Official PyTorch implementation of "Towards Adversarial Attack on Vision-Language Pre-training Models" ☆63 · Updated 2 years ago
- [AAAI 2024] Data-Free Hard-Label Robustness Stealing Attack ☆15 · Updated last year
- ☆40 · Updated 11 months ago
- Convert TensorFlow models to PyTorch models via [MMdnn](https://github.com/microsoft/MMdnn) for adversarial attacks. ☆90 · Updated 2 years ago
- ☆39 · Updated 5 months ago
- PyTorch implementation of Expectation over Transformation ☆13 · Updated last month
- ☆27 · Updated 2 years ago
- Official implementation of (CVPR 2022 Oral) Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks. ☆26 · Updated 2 months ago
- A reproduction of the Neural Cleanse paper; really simple and effective. Posted on okaland. ☆31 · Updated 4 years ago
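
Several entries above (the BadNets/SIG/FIBA/FTrojan collection, Hidden Trigger, WaNet) implement trigger-based backdoor poisoning. As a point of reference for the first entry, here is a minimal sketch of the core BadNets idea: stamp a fixed trigger patch onto a small fraction of training images and relabel them to an attacker-chosen target class. The trigger shape, poison rate, and function names are illustrative assumptions, not code taken from any of the listed repositories.

```python
# Minimal BadNets-style poisoning sketch (illustrative only).
# The 3x3 white-square trigger, poison rate, and names are assumptions,
# not code from any repository in the list above.
import torch


def poison_batch(images: torch.Tensor,
                 labels: torch.Tensor,
                 target_class: int = 0,
                 poison_rate: float = 0.1) -> tuple[torch.Tensor, torch.Tensor]:
    """Stamp a small trigger onto a fraction of the batch and relabel it.

    images: (N, C, H, W) tensor with values in [0, 1]; labels: (N,) class ids.
    """
    images = images.clone()
    labels = labels.clone()
    n_poison = max(1, int(poison_rate * images.size(0)))
    idx = torch.randperm(images.size(0))[:n_poison]
    # Trigger: a 3x3 white square in the bottom-right corner of each image.
    images[idx, :, -3:, -3:] = 1.0
    # Plain BadNets flips the labels; clean-label variants keep them.
    labels[idx] = target_class
    return images, labels


if __name__ == "__main__":
    x = torch.rand(8, 3, 32, 32)       # dummy CIFAR-sized batch
    y = torch.randint(0, 10, (8,))
    px, py = poison_batch(x, y, target_class=7, poison_rate=0.25)
    print(px.shape, py.tolist())
```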