guanjiyang / SAC
☆18 · Updated 2 years ago
Alternatives and similar repositories for SAC
Users interested in SAC are comparing it to the repositories listed below:
- ☆21 · Updated 2 years ago
- [ICLR 2023] Distilling Cognitive Backdoor Patterns within an Image ☆36 · Updated 10 months ago
- ☆27 · Updated 2 years ago
- Code for Transferable Unlearnable Examples ☆20 · Updated 2 years ago
- [CCS'22] SSLGuard: A Watermarking Scheme for Self-supervised Learning Pre-trained Encoders ☆20 · Updated 3 years ago
- APBench: A Unified Availability Poisoning Attack and Defenses Benchmark (TMLR 08/2024) ☆33 · Updated 4 months ago
- GitHub repo for One-shot Neural Backdoor Erasing via Adversarial Weight Masking (NeurIPS 2022) ☆15 · Updated 2 years ago
- ☆18 · Updated 3 years ago
- Reconstructive Neuron Pruning for Backdoor Defense (ICML 2023) ☆39 · Updated last year
- ☆45 · Updated last year
- [ICLR'21] Dataset Inference for Ownership Resolution in Machine Learning ☆32 · Updated 2 years ago
- [CVPR 2023] The official implementation of our CVPR 2023 paper "Detecting Backdoors During the Inference Stage Based on Corruption Robust… ☆23 · Updated 2 years ago
- ☆31 · Updated 3 years ago
- ☆82 · Updated 4 years ago
- CVPR 2025 - Anyattack: Towards Large-scale Self-supervised Adversarial Attacks on Vision-language Models ☆43 · Updated 3 weeks ago
- Backdooring Multimodal Learning ☆29 · Updated 2 years ago
- Official implementation of the ICCV 2023 paper: Enhancing Generalization of Universal Adversarial Perturbation through Gradient Aggregatio… ☆26 · Updated 2 years ago
- A curated list of papers on the transferability of adversarial examples ☆73 · Updated last year
- ☆24 · Updated 2 years ago
- ☆13 · Updated 3 years ago
- Simple yet effective targeted transferable attack (NeurIPS 2021) ☆51 · Updated 2 years ago
- Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks ☆17 · Updated 6 years ago
- ☆114 · Updated 3 months ago
- Code for paper: PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models, IEEE ICASSP 2024. Demo//124.220.228.133:11107 ☆17 · Updated last year
- Official implementation of the ICLR 2022 paper "Adversarial Unlearning of Backdoors via Implicit Hypergradient" ☆53 · Updated 2 years ago
- [AAAI 2024] Data-Free Hard-Label Robustness Stealing Attack ☆15 · Updated last year
- ☆24 · Updated last year
- An Embarrassingly Simple Backdoor Attack on Self-supervised Learning ☆17 · Updated last year
- Defending against Model Stealing via Verifying Embedded External Features ☆38 · Updated 3 years ago
- Code for Prior-Guided Adversarial Initialization for Fast Adversarial Training (ECCV 2022) ☆26 · Updated 2 years ago