WangLab2021 / AI-Security
☆12 · Updated 2 months ago
Alternatives and similar repositories for AI-Security
Users who are interested in AI-Security are comparing it to the libraries listed below.
- WaNet - Imperceptible Warping-based Backdoor Attack (ICLR 2021) ☆133 · Updated last year
- Invisible Backdoor Attack with Sample-Specific Triggers ☆105 · Updated 3 years ago
- A curated list of papers & resources on backdoor attacks and defenses in deep learning. ☆235 · Updated last year
- ☆580 · Updated 7 months ago
- A PyTorch implementation of some backdoor attack algorithms, including BadNets, SIG, FIBA, FTrojan ... ☆22 · Updated last year
- Official repository for the AAAI-20 paper "Hidden Trigger Backdoor Attacks" ☆133 · Updated 2 years ago
- The implementation of the IEEE S&P 2024 paper MM-BD: Post-Training Detection of Backdoor Attacks with Arbitrary Backdoor Pattern Types Us… ☆16 · Updated last year
- ☆27 · Updated 3 years ago
- TransferAttack is a PyTorch framework for boosting adversarial transferability in image classification. ☆437 · Updated 3 weeks ago
- ☆45 · Updated 2 years ago
- A curated list of papers & resources linked to data poisoning, backdoor attacks, and defenses against them (no longer maintained) ☆286 · Updated last year
- Revisiting Transferable Adversarial Images (TPAMI 2025) ☆140 · Updated 4 months ago
- ☆32 · Updated 4 years ago
- A curated list of papers on the transferability of adversarial examples ☆76 · Updated last year
- ☆10 · Updated last year
- A paper list for localized adversarial patch research ☆160 · Updated 6 months ago
- ☆17 · Updated last year
- [NeurIPS 2023] Boosting Adversarial Transferability by Achieving Flat Local Maxima ☆34 · Updated last year
- ☆44 · Updated last year
- Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks ☆18 · Updated 6 years ago
- [ICCV 2023] Gradient inversion attack, federated learning, generative adversarial network. ☆51 · Updated last year
- Spectrum simulation attack (ECCV 2022 Oral) towards boosting the transferability of adversarial examples ☆114 · Updated 3 years ago
- This repository contains Python code for the paper "Learn What You Want to Unlearn: Unlearning Inversion Attacks against Machine Unlearning" ☆19 · Updated last year
- ☆128 · Updated 4 months ago
- ☆22 · Updated 3 years ago
- The official implementation of the IEEE S&P '22 paper "SoK: How Robust is Deep Neural Network Image Classification Watermarking". ☆117 · Updated 2 years ago
- Website & documentation: https://sbaresearch.github.io/model-watermarking/ ☆25 · Updated 2 years ago
- Convert TensorFlow models to PyTorch models via [MMdnn](https://github.com/microsoft/MMdnn) for adversarial attacks. ☆94 · Updated 3 years ago
- Input Purification Defense Against Trojan Attacks on Deep Neural Network Systems ☆28 · Updated 4 years ago
- [NDSS 2025] Official code for our paper "Explanation as a Watermark: Towards Harmless and Multi-bit Model Ownership Verification via Wate…" ☆45 · Updated last year