SCLBD / Effective_backdoor_defense
☆14 · Oct 7, 2022 · Updated 3 years ago
Alternatives and similar repositories for Effective_backdoor_defense
Users who are interested in Effective_backdoor_defense are comparing it to the libraries listed below.
- ☆27 · Nov 9, 2022 · Updated 3 years ago
- ☆12 · May 6, 2022 · Updated 3 years ago
- [NeurIPS 2022] "Randomized Channel Shuffling: Minimal-Overhead Backdoor Attack Detection without Clean Datasets" by Ruisi Cai*, Zhenyu Zh… · ☆21 · Oct 1, 2022 · Updated 3 years ago
- ☆27 · Feb 1, 2023 · Updated 3 years ago
- ☆14 · Feb 26, 2025 · Updated 11 months ago
- ☆10 · Oct 31, 2022 · Updated 3 years ago
- The implementation of our IEEE S&P 2024 paper "Securely Fine-tuning Pre-trained Encoders Against Adversarial Examples". · ☆11 · Jun 28, 2024 · Updated last year
- Code repository for the paper "Revisiting the Assumption of Latent Separability for Backdoor Defenses" (ICLR 2023) · ☆47 · Feb 28, 2023 · Updated 2 years ago
- Code for the CVPR '23 paper "Defending Against Patch-based Backdoor Attacks on Self-Supervised Learning" · ☆10 · Jun 9, 2023 · Updated 2 years ago
- Code repository for the paper [USENIX Security 2023] "Towards A Proactive ML Approach for Detecting Backdoor Poison Samples" · ☆30 · Jul 11, 2023 · Updated 2 years ago
- [ICLR 2024] "Data Distillation Can Be Like Vodka: Distilling More Times For Better Quality" by Xuxi Chen*, Yu Yang*, Zhangyang Wang, Baha… · ☆15 · May 18, 2024 · Updated last year
- Official implementation of the NeurIPS 2022 paper "Pre-activation Distributions Expose Backdoor Neurons" · ☆15 · Jan 13, 2023 · Updated 3 years ago
- ☆20 · Oct 28, 2025 · Updated 3 months ago
- PyTorch implementation of our ICLR 2023 paper "Is Adversarial Training Really a Silver Bullet for Mitigating Data Poisoning?" · ☆12 · Mar 13, 2023 · Updated 2 years ago
- ☆13 · Oct 21, 2021 · Updated 4 years ago
- GitHub repo for "One-shot Neural Backdoor Erasing via Adversarial Weight Masking" (NeurIPS 2022) · ☆15 · Jan 3, 2023 · Updated 3 years ago
- [CVPR 2024] "Data Poisoning based Backdoor Attacks to Contrastive Learning": official code implementation · ☆16 · Feb 10, 2025 · Updated last year
- Official implementation of "Towards Robust Model Watermark via Reducing Parametric Vulnerability" · ☆16 · Jun 3, 2024 · Updated last year
- The code of the AAAI-21 paper "Defending against Backdoors in Federated Learning with Robust Learning Rate" · ☆35 · Oct 3, 2022 · Updated 3 years ago
- ☆19 · Jun 5, 2023 · Updated 2 years ago
- Implementation demo of the IJCAI 2022 paper [Eliminating Backdoor Triggers for Deep Neural Networks Using Attention Relation … · ☆21 · Nov 9, 2024 · Updated last year
- [NDSS'23] BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense · ☆17 · May 7, 2024 · Updated last year
- ☆19 · Jun 21, 2021 · Updated 4 years ago
- Code for identifying natural backdoors in existing image datasets · ☆15 · Aug 24, 2022 · Updated 3 years ago
- [NeurIPS 2021] Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training · ☆32 · Jan 9, 2022 · Updated 4 years ago
- [MM '24] EvilEdit: Backdooring Text-to-Image Diffusion Models in One Second · ☆27 · Nov 19, 2024 · Updated last year
- The official implementation of the USENIX Security '23 paper "Meta-Sift": ten minutes or less to find a 1000-size or larger clean subset on … · ☆20 · Apr 27, 2023 · Updated 2 years ago
- [S&P'24] Test-Time Poisoning Attacks Against Test-Time Adaptation Models · ☆19 · Feb 18, 2025 · Updated 11 months ago
- ☆54 · Sep 11, 2021 · Updated 4 years ago
- An Embarrassingly Simple Backdoor Attack on Self-supervised Learning · ☆20 · Jan 24, 2024 · Updated 2 years ago
- ☆59 · Nov 24, 2022 · Updated 3 years ago
- Code for the NeurIPS 2021 paper "Adversarial Neuron Pruning Purifies Backdoored Deep Models" · ☆63 · May 8, 2023 · Updated 2 years ago
- ☆28 · Jun 12, 2023 · Updated 2 years ago
- Official implementation of "RAW: A Robust and Agile Plug-and-Play Watermark Framework for AI-Generated Images (Videos) with Provable Gu… · ☆36 · Oct 30, 2024 · Updated last year
- ☆11 · Dec 23, 2024 · Updated last year
- Implementation of "Adversarial Purification with Score-based Generative Models", ICML 2021 · ☆30 · Oct 24, 2021 · Updated 4 years ago
- Official repository of the paper "Marking Code Without Breaking It: Code Watermarking for Detecting LLM-Generated Code" · ☆12 · Oct 7, 2025 · Updated 4 months ago
- [ICLR 2023, Spotlight] Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning · ☆33 · Dec 2, 2023 · Updated 2 years ago
- A Watermark-Conditioned Diffusion Model for IP Protection (ECCV 2024) · ☆34 · Apr 5, 2025 · Updated 10 months ago