sutd-visual-computing-group / Re-thinking_MI
[CVPR-2023] Re-thinking Model Inversion Attacks Against Deep Neural Networks
☆41 · Updated last year
Alternatives and similar repositories for Re-thinking_MI
Users interested in Re-thinking_MI are comparing it to the repositories listed below.
- [ICML 2022 / ICLR 2024] Source code for our papers "Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks" and "Be C… ☆45 · Updated 2 months ago
- ☆45 · Updated 2 years ago
- [KDD 2022] "Bilateral Dependency Optimization: Defending Against Model-inversion Attacks" ☆24 · Updated 3 weeks ago
- [ICML 2023] Are Diffusion Models Vulnerable to Membership Inference Attacks? ☆41 · Updated last year
- [ICCV-2023] Gradient inversion attack, federated learning, generative adversarial network. ☆45 · Updated last year
- ☆27 · Updated 3 years ago
- [arXiv:2411.10023] "Model Inversion Attacks: A Survey of Approaches and Countermeasures" ☆202 · Updated 4 months ago
- Implementation of the paper "MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation". ☆30 · Updated 3 years ago
- CVPR 2021 official repository for the Data-Free Model Extraction paper. https://arxiv.org/abs/2011.14779 ☆73 · Updated last year
- [ICLR2023] Distilling Cognitive Backdoor Patterns within an Image ☆36 · Updated 3 weeks ago
- Anti-Backdoor Learning (NeurIPS 2021) ☆84 · Updated 2 years ago
- A comprehensive toolbox for model inversion attacks and defenses that is easy to get started with. ☆182 · Updated 2 weeks ago
- A PyTorch implementation of the paper "Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage". ☆59 · Updated 2 years ago
- Code repository for the paper "Revisiting the Assumption of Latent Separability for Backdoor Defenses" (ICLR 2023) ☆44 · Updated 2 years ago
- Invisible Backdoor Attack with Sample-Specific Triggers ☆100 · Updated 3 years ago
- Source code for the ECCV 2022 poster "Data-free Backdoor Removal based on Channel Lipschitzness" ☆34 · Updated 2 years ago
- ☆32 · Updated 4 years ago
- WaNet - Imperceptible Warping-based Backdoor Attack (ICLR 2021) ☆129 · Updated 10 months ago
- Official repository for the AAAI-20 paper "Hidden Trigger Backdoor Attacks" ☆130 · Updated last year
- [ICLR2021] Unlearnable Examples: Making Personal Data Unexploitable ☆170 · Updated last year
- Official implementation of "RelaxLoss: Defending Membership Inference Attacks without Losing Utility" (ICLR 2022) ☆48 · Updated 3 years ago
- Input Purification Defense Against Trojan Attacks on Deep Neural Network Systems ☆28 · Updated 4 years ago
- ☆26 · Updated 3 years ago
- ☆59 · Updated 2 years ago
- [AAAI 2023] Pseudo Label-Guided Model Inversion Attack via Conditional Generative Adversarial Network ☆30 · Updated 11 months ago
- Code for the NeurIPS 2021 paper "Adversarial Neuron Pruning Purifies Backdoored Deep Models" ☆58 · Updated 2 years ago
- Code for the paper "Label-Only Membership Inference Attacks" ☆66 · Updated 4 years ago
- ICCV 2021. We find that most existing triggers of backdoor attacks in deep learning contain severe artifacts in the frequency domain. This Rep… ☆45 · Updated 3 years ago
- [NeurIPS23 (Spotlight)] "Model Sparsity Can Simplify Machine Unlearning" by Jinghan Jia*, Jiancheng Liu*, Parikshit Ram, Yuguang Yao, Gao… ☆81 · Updated last year
- A curated list of papers on the transferability of adversarial examples ☆72 · Updated last year