HanxunH / Detect-CLIP-Backdoor-Samples
[ICLR2025] Detecting Backdoor Samples in Contrastive Language Image Pretraining
☆13 · Updated 9 months ago
Alternatives and similar repositories for Detect-CLIP-Backdoor-Samples
Users interested in Detect-CLIP-Backdoor-Samples are comparing it to the libraries listed below.
- ☆24 · Updated 3 years ago
- Code for the paper "Overconfidence is a Dangerous Thing: Mitigating Membership Inference Attacks by Enforcing Less Confident Prediction" … ☆11 · Updated 2 years ago
- RAB: Provable Robustness Against Backdoor Attacks ☆39 · Updated 2 years ago
- [NeurIPS 2023] Black-box Backdoor Defense via Zero-shot Image Purification ☆14 · Updated 2 years ago
- ☆19 · Updated 4 years ago
- Official code implementation for the CCS 2022 paper "On the Privacy Risks of Cell-Based NAS Architectures" ☆11 · Updated 3 years ago
- ☆13 · Updated 4 years ago
- [NeurIPS 2022] "Randomized Channel Shuffling: Minimal-Overhead Backdoor Attack Detection without Clean Datasets" by Ruisi Cai*, Zhenyu Zh… ☆20 · Updated 3 years ago
- Official implementation of the NeurIPS 2022 paper "Pre-activation Distributions Expose Backdoor Neurons" ☆15 · Updated 2 years ago
- Code repository for the USENIX Security 2023 paper "Towards a Proactive ML Approach for Detecting Backdoor Poison Samples" ☆30 · Updated 2 years ago
- ☆19 · Updated 3 years ago
- Official implementation of our paper "Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protecti…" ☆58 · Updated last year
- ☆12 · Updated 3 years ago
- [S&P'24] Test-Time Poisoning Attacks Against Test-Time Adaptation Models ☆18 · Updated 9 months ago
- Defending against Model Stealing via Verifying Embedded External Features ☆38 · Updated 3 years ago
- ☆32 · Updated 3 years ago
- Code repository for the ICLR 2023 paper "Revisiting the Assumption of Latent Separability for Backdoor Defenses" ☆44 · Updated 2 years ago
- [CVPR 2023] Backdoor Defense via Adaptively Splitting Poisoned Dataset ☆48 · Updated last year
- Code for identifying natural backdoors in existing image datasets. ☆15 · Updated 3 years ago
- Official implementation of "Towards Robust Model Watermark via Reducing Parametric Vulnerability" ☆15 · Updated last year
- PyTorch implementation of our ICLR 2023 paper "Is Adversarial Training Really a Silver Bullet for Mitigating Data Poisoning?" ☆12 · Updated 2 years ago
- Code for the IEEE ICASSP 2024 paper "PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models". Demo//124.220.228.133:11107 ☆18 · Updated last year
- ICCV 2021 papers and code focused on adversarial attacks and defense ☆11 · Updated 4 years ago
- SaTML 2023; 1st place in the CVPR'21 Security AI Challenger: Unrestricted Adversarial Attacks on ImageNet ☆27 · Updated 2 years ago
- Official repository for the CVPR'23 paper "Detecting Backdoors in Pre-trained Encoders" ☆35 · Updated 2 years ago
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight) ☆27 · Updated last year
- ☆26 · Updated 2 years ago
- The implementation of our ICLR 2021 work "Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits" ☆18 · Updated 4 years ago
- Code for the paper "PatchGuard: A Provably Robust Defense against Adversarial Patches via Small Receptive Fields and Masking" ☆72 · Updated 3 years ago
- This is the source code for HufuNet. Our paper was accepted by IEEE TDSC. ☆26 · Updated 2 years ago