spencerwooo / torchattack
A curated list of adversarial attacks in PyTorch, with a focus on transferable black-box attacks.
☆53 · Updated 3 weeks ago
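For readers new to the topic, the common core of many attacks collected below is the gradient-sign step of FGSM. Here is a minimal NumPy sketch under simplifying assumptions: the logistic-regression "model", its weights, and the `fgsm_perturb` helper are illustrative inventions for this note, not torchattack's API (real libraries compute the input gradient with autograd on a full network).

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps=0.1):
    """One FGSM step against a toy logistic-regression model (illustration only).

    For binary cross-entropy loss, the gradient of the loss w.r.t. the
    input x has the closed form (sigmoid(w.x + b) - y) * w; FGSM moves x
    by eps in the sign of that gradient to increase the loss.
    """
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))  # model's predicted P(y=1)
    grad_x = (p - y) * w                           # dLoss/dx in closed form
    return x + eps * np.sign(grad_x)               # gradient-sign perturbation
```

For example, with `w = [2, -1]`, `b = 0`, input `x = [1, 1]`, and true label `y = 1`, the perturbed input becomes `[0.9, 1.1]` for `eps = 0.1`, which lowers the model's confidence in the correct class. Transfer attacks exploit the empirical fact that such perturbations, crafted on one (surrogate) model, often fool other models too.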
Alternatives and similar repositories for torchattack:
Users interested in torchattack are comparing it to the repositories listed below.
- Convert a TensorFlow model to a PyTorch model via [MMdnn](https://github.com/microsoft/MMdnn) for adversarial attacks. ☆85 · Updated 2 years ago
- The official implementation of the paper "Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protecti…". ☆55 · Updated last year
- An up-to-date and curated list of papers, methods, and resources on attacks against large vision-language models. ☆256 · Updated this week
- A curated list of papers & resources on backdoor attacks and defenses in deep learning. ☆198 · Updated last year
- WaNet: Imperceptible Warping-based Backdoor Attack (ICLR 2021). ☆121 · Updated 4 months ago
- Code for the ACM MM 2024 paper "White-box Multimodal Jailbreaks Against Large Vision-Language Models". ☆24 · Updated 3 months ago
- The official repository of "VLAttack: Multimodal Adversarial Attacks on Vision-Language Tasks via Pre-trained Models" (NeurIPS 2…). ☆50 · Updated last week
- Official repo to reproduce the paper "How to Backdoor Diffusion Models?", published at CVPR 2023. ☆87 · Updated 6 months ago
- Revisiting Transferable Adversarial Images (arXiv). ☆122 · Updated 3 weeks ago
- A list of recent papers about adversarial learning. ☆128 · Updated this week
- Official PyTorch implementation of "Towards Adversarial Attack on Vision-Language Pre-training Models". ☆59 · Updated 2 years ago
- A list of NeurIPS 2022 papers related to adversarial attacks and defenses / AI security. ☆71 · Updated 2 years ago
- Invisible Backdoor Attack with Sample-Specific Triggers. ☆94 · Updated 2 years ago
- [CVPR23W] "A Pilot Study of Query-Free Adversarial Attack against Stable Diffusion" by Haomin Zhuang, Yihua Zhang, and Sijia Liu. ☆26 · Updated 7 months ago
- Source code for the ACSAC paper "STRIP: A Defence Against Trojan Attacks on Deep Neural Networks". ☆55 · Updated 4 months ago
- [ICLR 2024] Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images. ☆32 · Updated last year
- The official implementation of "Intellectual Property Protection of Diffusion Models via the Watermark Diffusion Process". ☆21 · Updated last month
- Official PyTorch implementation of "Transferable Adversarial Attacks on Vision Transformers with Token Gradient Regularization" (CVPR 20…). ☆26 · Updated last year
- A curated list of papers on the transferability of adversarial examples. ☆63 · Updated 8 months ago
- Safety at Scale: A Comprehensive Survey of Large Model Safety. ☆129 · Updated last month
- A paper list for localized adversarial patch research. ☆148 · Updated last year
- CVPR 2021 official repository for the Data-Free Model Extraction paper (https://arxiv.org/abs/2011.14779). ☆71 · Updated last year