guanjiyang / SAC
☆17 · Updated 2 years ago
Alternatives and similar repositories for SAC:
Users interested in SAC are comparing it to the repositories listed below.
- The official implementation of our CVPR 2023 paper "Detecting Backdoors During the Inference Stage Based on Corruption Robustness Consist…" ☆21 · Updated last year
- [CCS'22] SSLGuard: A Watermarking Scheme for Self-supervised Learning Pre-trained Encoders ☆18 · Updated 2 years ago
- GitHub repo for "One-shot Neural Backdoor Erasing via Adversarial Weight Masking" (NeurIPS 2022) ☆14 · Updated 2 years ago
- Code for the paper "Autoregressive Perturbations for Data Poisoning" (NeurIPS 2022) ☆18 · Updated 4 months ago
- An Embarrassingly Simple Backdoor Attack on Self-supervised Learning ☆17 · Updated 11 months ago
- Implementation of BadCLIP (https://arxiv.org/pdf/2311.16194.pdf) ☆18 · Updated 9 months ago
- ☆18 · Updated 2 years ago
- ☆17 · Updated 3 years ago
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight) ☆25 · Updated 2 months ago
- ☆24 · Updated last year
- ☆16 · Updated 8 months ago
- ☆41 · Updated last year
- Official implementation of the ICLR 2022 paper "Adversarial Unlearning of Backdoors via Implicit Hypergradient" ☆54 · Updated 2 years ago
- [ICLR 2023] Distilling Cognitive Backdoor Patterns within an Image ☆32 · Updated 2 months ago
- A toolbox for backdoor attacks. ☆20 · Updated 2 years ago
- ☆30 · Updated 2 years ago
- Code for paper: "PromptCARE: Prompt Copyright Protection by Watermark Injection and Verification", IEEE S&P 2024.☆30Updated 5 months ago
- [CVPR 2023] Backdoor Defense via Adaptively Splitting Poisoned Dataset ☆46 · Updated 9 months ago
- The official implementation of the paper "Free Fine-tuning: A Plug-and-Play Watermarking Scheme for Deep Neural Networks" ☆17 · Updated 8 months ago
- Source code for the ECCV 2022 poster "Data-free Backdoor Removal based on Channel Lipschitzness" ☆29 · Updated 2 years ago
- Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks ☆17 · Updated 5 years ago
- [ICLR'21] Dataset Inference for Ownership Resolution in Machine Learning ☆32 · Updated 2 years ago
- ICCV 2021; we find that most existing triggers of backdoor attacks in deep learning contain severe artifacts in the frequency domain. This Rep… ☆43 · Updated 2 years ago
- ☆20 · Updated last year
- [MM'23 Oral] "Text-to-image diffusion models can be easily backdoored through multimodal data poisoning" ☆25 · Updated this week
- Implementation of the paper "MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation" ☆29 · Updated 3 years ago
- ☆19 · Updated 2 years ago
- Code for Transferable Unlearnable Examples ☆17 · Updated last year
- Code for the paper "PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models" (IEEE ICASSP 2024). Demo: 124.220.228.133:11107 ☆14 · Updated 5 months ago
- Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation (NeurIPS 2022) ☆33 · Updated 2 years ago