☆18 · Oct 7, 2022 · Updated 3 years ago
Alternatives and similar repositories for SAC
Users interested in SAC are comparing it to the libraries listed below.
- Code for the paper "RemovalNet: DNN model fingerprinting removal attack", IEEE TDSC 2023. ☆10 · Nov 27, 2023 · Updated 2 years ago
- ICCV 2023 - AdaptGuard: Defending Against Universal Attacks for Model Adaptation ☆11 · Dec 23, 2023 · Updated 2 years ago
- ☆19 · Jun 27, 2021 · Updated 4 years ago
- ☆10 · Mar 20, 2023 · Updated 2 years ago
- Unlearnable Examples Give a False Sense of Security: Piercing through Unexploitable Data with Learnable Examples ☆11 · Oct 14, 2024 · Updated last year
- ☆50 · Feb 27, 2021 · Updated 5 years ago
- ☆10 · Jul 28, 2022 · Updated 3 years ago
- Implementation of Self-supervised-Online-Adversarial-Purification ☆13 · Aug 2, 2021 · Updated 4 years ago
- The official PyTorch implementation of the ACM MM 19 paper "MetaAdvDet: Towards Robust Detection of Evolving Adversarial Attacks" ☆11 · Jun 7, 2021 · Updated 4 years ago
- Code release for DeepJudge (S&P '22) ☆52 · Mar 14, 2023 · Updated 2 years ago
- [ICLR'21] Dataset Inference for Ownership Resolution in Machine Learning ☆32 · Oct 10, 2022 · Updated 3 years ago
- [CVPR'24] LOTUS: Evasive and Resilient Backdoor Attacks through Sub-Partitioning ☆15 · Jan 15, 2025 · Updated last year
- [Preprint] On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping ☆10 · Feb 27, 2020 · Updated 6 years ago
- Official code for the ICML 2024 paper "Connecting the Dots: Collaborative Fine-tuning for Black-Box Vision-Language Models" ☆19 · Jun 12, 2024 · Updated last year
- Code for the paper "Dynamic Backdoor Attacks Against Machine Learning Models" ☆16 · Nov 20, 2023 · Updated 2 years ago
- ☆16 · Jul 17, 2022 · Updated 3 years ago
- GitHub repo for "One-shot Neural Backdoor Erasing via Adversarial Weight Masking" (NeurIPS 2022) ☆15 · Jan 3, 2023 · Updated 3 years ago
- Defending against Model Stealing via Verifying Embedded External Features ☆38 · Feb 19, 2022 · Updated 4 years ago
- ☆19 · Mar 26, 2022 · Updated 3 years ago
- CVPR 2025 - R-TPT: Improving Adversarial Robustness of Vision-Language Models through Test-Time Prompt Tuning ☆21 · Aug 28, 2025 · Updated 6 months ago
- Code for the paper "PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models", IEEE ICASSP 2024. Demo: //124.220.228.133:11107 ☆20 · Aug 10, 2024 · Updated last year
- [CCS'22] SSLGuard: A Watermarking Scheme for Self-supervised Learning Pre-trained Encoders ☆18 · Jul 12, 2022 · Updated 3 years ago
- The official repository of the ECCV 2024 paper "Outlier-Aware Test-time Adaptation with Stable Memory Replay" ☆19 · Jun 8, 2025 · Updated 8 months ago
- NN 2023 ☆23 · Nov 9, 2022 · Updated 3 years ago
- Code for "CloudLeak: Large-Scale Deep Learning Models Stealing Through Adversarial Examples" (NDSS 2020) ☆22 · Nov 14, 2020 · Updated 5 years ago
- CNN-based fast source device identification ☆24 · Sep 5, 2022 · Updated 3 years ago
- Example of the attack described in the paper "Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization" ☆21 · Oct 25, 2019 · Updated 6 years ago
- PyTorch deep learning object detection using the CINIC-10 dataset. ☆22 · Feb 26, 2020 · Updated 6 years ago
- ☆94 · Mar 23, 2021 · Updated 4 years ago
- [CVPR 2023] The official implementation of the paper "Detecting Backdoors During the Inference Stage Based on Corruption Robust…" ☆24 · May 25, 2023 · Updated 2 years ago
- ☆26 · Dec 1, 2022 · Updated 3 years ago
- ☆28 · Jun 17, 2024 · Updated last year
- ☆24 · Apr 14, 2019 · Updated 6 years ago
- ☆28 · Aug 21, 2023 · Updated 2 years ago
- ☆26 · Jan 11, 2023 · Updated 3 years ago
- Implementation of "Adversarial Frontier Stitching for Remote Neural Network Watermarking" in TensorFlow. ☆24 · Aug 30, 2021 · Updated 4 years ago
- Source code for MEA-Defender. The paper was accepted at the IEEE Symposium on Security and Privacy (S&P) 2024. ☆29 · Nov 19, 2023 · Updated 2 years ago
- Code used in "Decision Boundary Analysis of Adversarial Examples" (https://openreview.net/forum?id=BkpiPMbA-) ☆29 · Oct 17, 2018 · Updated 7 years ago
- The official implementation of the IEEE S&P '22 paper "SoK: How Robust is Deep Neural Network Image Classification Watermarking". ☆117 · May 24, 2023 · Updated 2 years ago