sigma0-advx / sigma-zero
☆12 · Updated last month
Alternatives and similar repositories for sigma-zero
Users interested in sigma-zero are comparing it to the libraries listed below.
- Attack benchmark repository ☆14 · Updated 3 weeks ago
- SecML-Torch: A Library for Robustness Evaluation of Deep Learning Models ☆49 · Updated 2 months ago
- Source code for the "Energy-Latency Attacks via Sponge Poisoning" paper. ☆15 · Updated 3 years ago
- Foolbox implementation for the NeurIPS 2021 paper "Fast Minimum-norm Adversarial Attacks through Adaptive Norm Constraints". ☆25 · Updated 3 years ago
- ☆51 · Updated 3 years ago
- Code for "On Adaptive Attacks to Adversarial Example Defenses" ☆87 · Updated 4 years ago
- Revisiting Transferable Adversarial Images (arXiv) ☆122 · Updated 2 months ago
- This repository provides simple PyTorch implementations for adversarial training methods on CIFAR-10. ☆165 · Updated 4 years ago
- Code repository for the CVPR 2024 paper "Pre-trained Model Guided Fine-Tuning for Zero-Shot Adversarial Robustness" ☆20 · Updated 11 months ago
- Library containing PyTorch implementations of various adversarial attacks and resources ☆155 · Updated 2 weeks ago
- A paper list for localized adversarial patch research ☆148 · Updated last year
- Attacking a dog-vs-fish classifier that uses transfer learning with InceptionV3 ☆70 · Updated 7 years ago
- A toolbox for backdoor attacks. ☆22 · Updated 2 years ago
- Fantastic Robustness Measures: The Secrets of Robust Generalization [NeurIPS 2023] ☆40 · Updated 4 months ago
- ☆51 · Updated 3 years ago
- A curated list of papers on the transferability of adversarial examples ☆66 · Updated 10 months ago
- ☆81 · Updated 3 years ago
- [ECCV 2024] Towards Reliable Evaluation and Fast Training of Robust Semantic Segmentation Models ☆19 · Updated 10 months ago
- [ICML 2023] Are Diffusion Models Vulnerable to Membership Inference Attacks? ☆34 · Updated 8 months ago
- Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks ☆17 · Updated 6 years ago
- Implementation of the CVPR 2022 oral paper "Better Trigger Inversion Optimization in Backdoor Scanning". ☆24 · Updated 3 years ago
- Implements Adversarial Examples for Semantic Segmentation and Object Detection, using PyTorch and Detectron2 ☆50 · Updated 4 years ago
- ☆19 · Updated 2 years ago
- Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching ☆102 · Updated 8 months ago
- A Leaderboard for Certifiable Robustness against Adversarial Patch Attacks ☆21 · Updated last year
- Source code for Data-free Backdoor, accepted at the 32nd USENIX Security Symposium (USENIX Security 2023). ☆30 · Updated last year
- Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks (ICLR '20) ☆30 · Updated 4 years ago
- Implementations of data poisoning attacks against neural networks and related defenses. ☆85 · Updated 10 months ago
- ☆26 · Updated 2 years ago
- [CVPR'24] LOTUS: Evasive and Resilient Backdoor Attacks through Sub-Partitioning ☆14 · Updated 4 months ago