zaixizhang / CBD
Official implementation of the CVPR 2023 paper "Backdoor Defense via Deconfounded Representation Learning"
☆24 · Updated last year
Related projects:
- Towards Stable Backdoor Purification through Feature Shift Tuning (NeurIPS 2023) ☆22 · Updated last month
- This repository is the official implementation of the paper "ASSET: Robust Backdoor Data Detection Across a Multiplicity of Deep Learning… ☆16 · Updated last year
- This is the repository that introduces research topics related to protecting intellectual property (IP) of AI from a data-centric perspec… ☆21 · Updated 10 months ago
- Codes for reproducing the results of the paper "Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness" published at IC… ☆26 · Updated 4 years ago
- Code for the paper "BadPrompt: Backdoor Attacks on Continuous Prompts" ☆32 · Updated 2 months ago
- Implementation for Poison Attacks against Text Datasets with Conditional Adversarially Regularized Autoencoder (EMNLP-Findings 2020) ☆15 · Updated 3 years ago
- "Challenging Forgets: Unveiling the Worst-Case Forget Sets in Machine Unlearning" by Chongyu Fan*, Jiancheng Liu*, Alfred Hero, Sijia Liu ☆14 · Updated 2 months ago
- Defending Against Backdoor Attacks Using Robust Covariance Estimation ☆20 · Updated 3 years ago
- This repo is the official implementation of the ICLR'23 paper "Towards Robustness Certification Against Universal Perturbations." We calc… ☆12 · Updated last year
- Reconstructive Neuron Pruning for Backdoor Defense (ICML 2023) ☆25 · Updated 8 months ago
- The official implementation of USENIX Security'23 paper "Meta-Sift" -- Ten minutes or less to find a 1000-size or larger clean subset on … ☆16 · Updated last year
- [Preprint] On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping ☆10 · Updated 4 years ago
- This is the implementation for CVPR 2022 Oral paper "Better Trigger Inversion Optimization in Backdoor Scanning." ☆23 · Updated 2 years ago
- Not All Poisons are Created Equal: Robust Training against Data Poisoning (ICML 2022) ☆17 · Updated 2 years ago
- [NeurIPS'22] Trap and Replace: Defending Backdoor Attacks by Trapping Them into an Easy-to-Replace Subnetwork. Haotao Wang, Junyuan Hong,… ☆13 · Updated 9 months ago
- Code relative to "Adversarial robustness against multiple and single $l_p$-threat models via quick fine-tuning of robust classifiers" ☆15 · Updated last year
- RAB: Provable Robustness Against Backdoor Attacks ☆39 · Updated 11 months ago
- Robustify Black-Box Models (ICLR'22 - Spotlight) ☆24 · Updated last year
- Camouflage poisoning via machine unlearning ☆14 · Updated last year
- Codes for NeurIPS 2021 paper "Adversarial Neuron Pruning Purifies Backdoored Deep Models" ☆52 · Updated last year
- Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation (NeurIPS 2022) ☆33 · Updated last year
- [NeurIPS 2022] "Randomized Channel Shuffling: Minimal-Overhead Backdoor Attack Detection without Clean Datasets" by Ruisi Cai*, Zhenyu Zh… ☆18 · Updated last year
- [ICLR'21] Dataset Inference for Ownership Resolution in Machine Learning ☆30 · Updated last year
- Code&Data for the paper "Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents" ☆29 · Updated 3 months ago