guanjiyang / SAC
☆18 · Updated 3 years ago
Alternatives and similar repositories for SAC
Users who are interested in SAC are comparing it to the repositories listed below.
- ☆18 · Updated 4 years ago
- ☆10 · Updated last year
- ICCV 2021, We find most existing triggers of backdoor attacks in deep learning contain severe artifacts in the frequency domain. This Rep… ☆46 · Updated 3 years ago
- ☆25 · Updated 2 years ago
- [CCS'22] SSLGuard: A Watermarking Scheme for Self-supervised Learning Pre-trained Encoders ☆19 · Updated 3 years ago
- ☆45 · Updated 2 years ago
- Code for Transferable Unlearnable Examples ☆23 · Updated 2 years ago
- [ICLR'21] Dataset Inference for Ownership Resolution in Machine Learning ☆32 · Updated 3 years ago
- [CVPR 2023] The official implementation of our CVPR 2023 paper "Detecting Backdoors During the Inference Stage Based on Corruption Robust… ☆23 · Updated 2 years ago
- GitHub repo for One-shot Neural Backdoor Erasing via Adversarial Weight Masking (NeurIPS 2022) ☆15 · Updated 2 years ago
- ☆20 · Updated 3 years ago
- Code for the paper "Autoregressive Perturbations for Data Poisoning" (NeurIPS 2022) ☆20 · Updated last year
- APBench: A Unified Availability Poisoning Attack and Defenses Benchmark (TMLR 08/2024) ☆37 · Updated 8 months ago
- ☆32 · Updated 3 years ago
- ☆27 · Updated 3 years ago
- A curated list of papers on the transferability of adversarial examples ☆75 · Updated last year
- [AAAI 2024] Data-Free Hard-Label Robustness Stealing Attack ☆15 · Updated last year
- [ICLR 2023] Distilling Cognitive Backdoor Patterns within an Image ☆36 · Updated last month
- ☆27 · Updated 2 years ago
- Official repository for Practical Membership Inference Attacks Against Large-Scale Multi-Modal Models: A Pilot Study (ICCV 2023… ☆24 · Updated 2 years ago
- Code for the paper "PromptCARE: Prompt Copyright Protection by Watermark Injection and Verification", IEEE S&P 2024 ☆33 · Updated last year
- Official code of the IEEE S&P 2024 paper "Why Does Little Robustness Help? A Further Step Towards Understanding Adversarial Transferabili… ☆20 · Updated last year
- Official implementation of "Towards Reliable Verification of Unauthorized Data Usage in Personalized Text-to-Image Diffusion Models" (IE… ☆26 · Updated 8 months ago
- All code and data needed to replicate the experiments in the paper "BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Model…" ☆13 · Updated last year
- ☆26 · Updated last year
- ☆54 · Updated 4 years ago
- Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks ☆18 · Updated 6 years ago
- Code for "Prior-Guided Adversarial Initialization for Fast Adversarial Training" (ECCV 2022) ☆28 · Updated 3 years ago
- Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation (NeurIPS 2022) ☆33 · Updated 3 years ago
- Source code for the ECCV 2022 poster "Data-free Backdoor Removal based on Channel Lipschitzness" ☆34 · Updated 2 years ago