spencerwooo / torchattack
🛡 A curated list of adversarial attacks in PyTorch, with a focus on transferable black-box attacks.
☆62 · Updated last month
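For orientation, most attacks collected in lists like this follow the same pattern: perturb an input within a small budget so the model's loss increases. Below is a minimal, generic FGSM sketch in plain PyTorch; it is not torchattack's API, and the function name and signature are illustrative only.

```python
import torch
import torch.nn.functional as F

def fgsm(model, images, labels, eps=8 / 255):
    # Illustrative one-step Fast Gradient Sign Method (not torchattack's API):
    # perturb inputs in the direction of the sign of the loss gradient,
    # within an L-infinity budget of eps.
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    grad = torch.autograd.grad(loss, images)[0]
    adv = images + eps * grad.sign()
    return adv.clamp(0, 1).detach()  # keep pixels in the valid [0, 1] range
```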
Alternatives and similar repositories for torchattack
Users interested in torchattack are comparing it to the libraries listed below.
- ☆15 · Updated last year
- TransferAttack is a PyTorch framework for boosting adversarial transferability in image classification. ☆386 · Updated this week
- Converts TensorFlow models to PyTorch models via [MMdnn](https://github.com/microsoft/MMdnn) for adversarial attacks. ☆90 · Updated 2 years ago
- ☆223 · Updated 2 weeks ago
- This is the official implementation of our paper 'Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protecti…'. ☆57 · Updated last year
- A comprehensive toolbox for model inversion attacks and defenses that is easy to get started with. ☆183 · Updated 5 months ago
- A list of recent papers about adversarial learning ☆204 · Updated last week
- Invisible Backdoor Attack with Sample-Specific Triggers ☆97 · Updated 3 years ago
- A curated list of papers & resources on backdoor attacks and defenses in deep learning. ☆218 · Updated last year
- ☆115 · Updated 3 months ago
- SampDetox: Black-box Backdoor Defense via Perturbation-based Sample Detoxification ☆12 · Updated 2 months ago
- ☆82 · Updated 4 years ago
- WaNet - Imperceptible Warping-based Backdoor Attack (ICLR 2021) ☆128 · Updated 9 months ago
- A list of papers in NeurIPS 2022 related to adversarial attack and defense / AI security. ☆71 · Updated 2 years ago
- Simple PyTorch implementations of BadNets on MNIST and CIFAR10. ☆181 · Updated 2 years ago
- Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks (IEEE S&P 2024) ☆34 · Updated 2 months ago
- Source code release for the ACSAC paper "STRIP: A Defence Against Trojan Attacks on Deep Neural Networks" ☆58 · Updated 9 months ago
- Official repository for the AAAI-20 paper "Hidden Trigger Backdoor Attacks" ☆129 · Updated last year
- ☆102 · Updated last year
- [NeurIPS-2023] Annual Conference on Neural Information Processing Systems ☆210 · Updated 8 months ago
- 😎 An up-to-date & curated list of papers, methods & resources on attacks against Large Vision-Language Models. ☆371 · Updated 3 weeks ago
- A reproduction of the Neural Cleanse paper; really simple yet effective. Posted on okaland. ☆30 · Updated 4 years ago
- ☆533 · Updated 2 months ago
- A curated list of papers & resources linked to data poisoning, backdoor attacks and defenses against them (no longer maintained) ☆269 · Updated 7 months ago
- This is the code repository of our submission: Understanding the Dark Side of LLMs’ Intrinsic Self-Correction. ☆62 · Updated 8 months ago
- ☆25 · Updated 2 years ago
- ABS: Scanning Neural Networks for Back-doors by Artificial Brain Stimulation ☆51 · Updated 3 years ago
- This GitHub repository summarizes research papers on AI security from the four top academic conferences. ☆146 · Updated 3 months ago
- Repository for the paper (AAAI 2024, Oral) "Visual Adversarial Examples Jailbreak Large Language Models" ☆234 · Updated last year
- [ICLR 2024] Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images ☆38 · Updated last year