WangLab2021 / AI-Security
☆12 · Updated 10 months ago
Alternatives and similar repositories for AI-Security
Users interested in AI-Security are comparing it to the repositories listed below.
- A PyTorch implementation of some backdoor attack algorithms, including BadNets, SIG, FIBA, FTrojan ... ☆19 · Updated 5 months ago
- ☆44 · Updated last year
- WaNet - Imperceptible Warping-based Backdoor Attack (ICLR 2021) ☆125 · Updated 6 months ago
- Revisiting Transferable Adversarial Images (arXiv) ☆122 · Updated 2 months ago
- Official repository for the AAAI-20 paper "Hidden Trigger Backdoor Attacks" ☆127 · Updated last year
- Invisible Backdoor Attack with Sample-Specific Triggers ☆94 · Updated 2 years ago
- A curated list of papers on the transferability of adversarial examples ☆66 · Updated 10 months ago
- ☆19 · Updated 2 years ago
- ☆27 · Updated last year
- ☆31 · Updated 4 years ago
- Enhancing the Self-Universality for Transferable Targeted Attacks [CVPR 2023] ☆35 · Updated last year
- Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks ☆17 · Updated 6 years ago
- [ACM MM 2023] Improving the Transferability of Adversarial Examples with Arbitrary Style Transfer ☆18 · Updated last year
- IBA: Towards Irreversible Backdoor Attacks in Federated Learning (poster at NeurIPS 2023) ☆36 · Updated last year
- Official repository for the paper "An Adaptive Model Ensemble Adversarial Attack for Boosting Adversarial Transferability" ☆40 · Updated last year
- [ICLR 2023, Best Paper Award at ECCV'22 AROW Workshop] FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning ☆55 · Updated 5 months ago
- Spectrum Simulation Attack (ECCV 2022 Oral) for boosting the transferability of adversarial examples ☆104 · Updated 2 years ago
- Convert TensorFlow models to PyTorch models via [MMdnn](https://github.com/microsoft/MMdnn) for adversarial attacks ☆85 · Updated 2 years ago
- ☆105 · Updated last year
- ☆12 · Updated last year
- Input Purification Defense Against Trojan Attacks on Deep Neural Network Systems ☆27 · Updated 4 years ago
- Official implementation of the CVPR 2023 paper "Detecting Backdoors During the Inference Stage Based on Corruption Robust… ☆23 · Updated last year
- ☆26 · Updated 2 years ago
- Implementation of the paper "MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation" ☆30 · Updated 3 years ago
- [NeurIPS 2023] Boosting Adversarial Transferability by Achieving Flat Local Maxima ☆30 · Updated last year
- [ICLR 2024] "Backdoor Federated Learning by Poisoning Backdoor-Critical Layers" ☆33 · Updated 5 months ago
- Stochastic Variance Reduced Ensemble Adversarial Attack for Boosting the Adversarial Transferability ☆24 · Updated 2 years ago
- PyTorch implementation of Expectation over Transformation ☆13 · Updated 2 years ago
- Official PyTorch implementation of "Towards Adversarial Attack on Vision-Language Pre-training Models" ☆58 · Updated 2 years ago
- ☆34 · Updated 7 months ago