Zhang-Henry / INACTIVE
The official implementation of the CVPR 2025 paper "Invisible Backdoor Attack against Self-supervised Learning"
☆17 · Updated 7 months ago
Alternatives and similar repositories for INACTIVE
Users interested in INACTIVE are comparing it to the repositories listed below.
- Code for paper: PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models, IEEE ICASSP 2024. Demo: //124.220.228.133:11107 ☆20 · Updated last year
- An Embarrassingly Simple Backdoor Attack on Self-supervised Learning ☆20 · Updated 2 years ago
- ☆30 · Updated last year
- ☆14 · Updated 11 months ago
- This is the repository that introduces research topics related to protecting intellectual property (IP) of AI from a data-centric perspec… ☆23 · Updated 2 years ago
- [CVPR 2023] Backdoor Defense via Adaptively Splitting Poisoned Dataset ☆49 · Updated last year
- ☆14 · Updated 2 years ago
- Implementation of BadCLIP https://arxiv.org/pdf/2311.16194.pdf ☆23 · Updated last year
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight) ☆27 · Updated last year
- [NeurIPS'25] Backdoor Cleaning without External Guidance in MLLM Fine-tuning ☆17 · Updated 4 months ago
- Reconstructive Neuron Pruning for Backdoor Defense (ICML 2023) ☆39 · Updated 2 years ago
- ☆12 · Updated 3 years ago
- ☆14 · Updated 3 years ago
- The code implementation of MuScleLoRA (Accepted in ACL 2024) ☆10 · Updated last year
- [CCS-LAMPS'24] LLM IP Protection Against Model Merging ☆16 · Updated last year
- The official implementation of USENIX Security'23 paper "Meta-Sift" -- Ten minutes or less to find a 1000-size or larger clean subset on … ☆20 · Updated 2 years ago
- The Oyster series is a set of safety models developed in-house by Alibaba-AAIG, devoted to building a responsible AI ecosystem. | Oyster … ☆59 · Updated 5 months ago
- Official implementation of the NeurIPS 2022 paper "Pre-activation Distributions Expose Backdoor Neurons" ☆15 · Updated 3 years ago
- [NeurIPS 2023] Black-box Backdoor Defense via Zero-shot Image Purification ☆16 · Updated 2 years ago
- Code for the CVPR '23 paper, "Defending Against Patch-based Backdoor Attacks on Self-Supervised Learning" ☆10 · Updated 2 years ago
- ☆25 · Updated 10 months ago
- Code for the paper "Overconfidence is a Dangerous Thing: Mitigating Membership Inference Attacks by Enforcing Less Confident Prediction" … ☆12 · Updated 2 years ago
- ☆12 · Updated 3 years ago
- Official implementation of Towards Robust Model Watermark via Reducing Parametric Vulnerability ☆16 · Updated last year
- Official implementation of the CVPR 2023 paper "Backdoor Defense via Deconfounded Representation Learning" ☆25 · Updated 2 years ago
- ☆14 · Updated last year
- Official repository for Targeted Unlearning with Single Layer Unlearning Gradient (SLUG), ICML 2025 ☆14 · Updated 6 months ago
- ☆13 · Updated last year
- ☆34 · Updated 3 years ago
- ☆10 · Updated last year