LukasStruppek / Plug-and-Play-Attacks
[ICML 2022 / ICLR 2024] Source code for our papers "Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks" and "Be Careful What You Smooth For".
☆38 · Updated 3 months ago
Related projects
Alternatives and complementary repositories for Plug-and-Play-Attacks
- ☆41 · Updated last year
- [KDD 2022] "Bilateral Dependency Optimization: Defending Against Model-inversion Attacks" ☆22 · Updated 10 months ago
- [ICLR'21] Dataset Inference for Ownership Resolution in Machine Learning ☆31 · Updated 2 years ago
- Official implementation of "RelaxLoss: Defending Membership Inference Attacks without Losing Utility" (ICLR 2022) ☆46 · Updated 2 years ago
- Defending against Model Stealing via Verifying Embedded External Features ☆32 · Updated 2 years ago
- [ICLR 2023] Distilling Cognitive Backdoor Patterns within an Image ☆31 · Updated last month
- [CVPR 2023] Re-thinking Model Inversion Attacks Against Deep Neural Networks ☆36 · Updated last year
- [ICML 2023] Are Diffusion Models Vulnerable to Membership Inference Attacks? ☆31 · Updated 2 months ago
- [ICCV 2021] We find that most existing backdoor attack triggers in deep learning contain severe artifacts in the frequency domain. This Rep… ☆41 · Updated 2 years ago
- Code for "Variational Model Inversion Attacks", Wang et al., NeurIPS 2021 ☆20 · Updated 2 years ago
- Official implementation of the ICLR 2022 paper "Adversarial Unlearning of Backdoors via Implicit Hypergradient" ☆50 · Updated 2 years ago
- ☆23 · Updated 2 years ago
- Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation (NeurIPS 2022) ☆33 · Updated last year
- This repository introduces research topics related to protecting the intellectual property (IP) of AI from a data-centric perspec… ☆22 · Updated last year
- ☆28 · Updated 2 years ago
- Code repository for the paper "Revisiting the Assumption of Latent Separability for Backdoor Defenses" (ICLR 2023) ☆34 · Updated last year
- [CVPR 2021] Official repository for the Data-Free Model Extraction paper: https://arxiv.org/abs/2011.14779 ☆69 · Updated 7 months ago
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight) ☆24 · Updated this week
- Anti-Backdoor Learning (NeurIPS 2021) ☆78 · Updated last year
- GitHub repo for "One-shot Neural Backdoor Erasing via Adversarial Weight Masking" (NeurIPS 2022) ☆14 · Updated last year
- The official implementation of our CVPR 2023 paper "Detecting Backdoors During the Inference Stage Based on Corruption Robustness Consist… ☆19 · Updated last year
- [CVPRW'22] A privacy attack that exploits adversarially trained models to compromise the privacy of Federated Learning systems ☆12 · Updated 2 years ago
- Implementation of the paper "MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation" ☆28 · Updated 2 years ago
- Official repository for ResSFL (accepted by CVPR '22) ☆19 · Updated 2 years ago
- ☆22 · Updated last year
- ☆17 · Updated 2 years ago
- Code for "Backdoor Attacks Against Dataset Distillation" ☆30 · Updated last year
- ☆24 · Updated 3 years ago
- Code release for "Unrolling SGD: Understanding Factors Influencing Machine Unlearning", published at EuroS&P '22 ☆22 · Updated 2 years ago
- Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks ☆17 · Updated 5 years ago