HanxunH / Detect-CLIP-Backdoor-Samples
[ICLR 2025] Detecting Backdoor Samples in Contrastive Language Image Pretraining
☆18 · Updated 11 months ago
Alternatives and similar repositories for Detect-CLIP-Backdoor-Samples
Users interested in Detect-CLIP-Backdoor-Samples are comparing it to the repositories listed below.
- Code repository for the paper [USENIX Security 2023] Towards A Proactive ML Approach for Detecting Backdoor Poison Samples ☆30 · Updated 2 years ago
- Official implementation of Towards Robust Model Watermark via Reducing Parametric Vulnerability ☆16 · Updated last year
- ICCV 2021 papers and code focusing on adversarial attacks and defenses ☆11 · Updated 4 years ago
- ☆25 · Updated 3 years ago
- Code for the paper "PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models" (IEEE ICASSP 2024). Demo: 124.220.228.133:11107 ☆19 · Updated last year
- ☆34 · Updated 3 years ago
- [CVPR 2023] Backdoor Defense via Adaptively Splitting Poisoned Dataset ☆49 · Updated last year
- SaTML 2023; 1st place in the CVPR'21 Security AI Challenger: Unrestricted Adversarial Attacks on ImageNet ☆27 · Updated 3 years ago
- [NeurIPS 2022] "Randomized Channel Shuffling: Minimal-Overhead Backdoor Attack Detection without Clean Datasets" by Ruisi Cai*, Zhenyu Zh… ☆21 · Updated 3 years ago
- ☆12 · Updated 3 years ago
- ☆13 · Updated 4 years ago
- ☆26 · Updated 3 years ago
- Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation (NeurIPS 2022) ☆33 · Updated 3 years ago
- Official implementation of the NeurIPS 2022 paper Pre-activation Distributions Expose Backdoor Neurons ☆15 · Updated 3 years ago
- [NeurIPS 2023] Black-box Backdoor Defense via Zero-shot Image Purification ☆16 · Updated 2 years ago
- Data-Efficient Backdoor Attacks ☆20 · Updated 3 years ago
- Code for identifying natural backdoors in existing image datasets ☆15 · Updated 3 years ago
- ☆14 · Updated 11 months ago
- Code for the paper "RemovalNet: DNN model fingerprinting removal attack" (IEEE TDSC 2023) ☆10 · Updated 2 years ago
- Code repository for the paper Revisiting the Assumption of Latent Separability for Backdoor Defenses (ICLR 2023) ☆47 · Updated 2 years ago
- [S&P'24] Test-Time Poisoning Attacks Against Test-Time Adaptation Models ☆19 · Updated 11 months ago
- ☆27 · Updated 3 years ago
- Official implementation of the CVPR 2022 paper "Backdoor Attacks on Self-Supervised Learning" ☆76 · Updated 2 years ago
- ☆14 · Updated 3 years ago
- ☆23 · Updated 5 years ago
- Unlearnable Examples Give a False Sense of Security: Piercing through Unexploitable Data with Learnable Examples ☆11 · Updated last year
- Official implementation of the paper "Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protecti…" ☆58 · Updated last year
- Universal Adversarial Perturbations (UAPs) for PyTorch ☆49 · Updated 4 years ago
- Code for the AAAI 2021 paper "Towards Feature Space Adversarial Attack" ☆30 · Updated 4 years ago
- Defending against Model Stealing via Verifying Embedded External Features ☆38 · Updated 3 years ago