illidanlab / inversion-influence-function
Official code for "Understanding Deep Gradient Leakage via Inversion Influence Functions", NeurIPS 2023
☆16 · Updated last year
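For context, the paper studies Deep Gradient Leakage (DGL): reconstructing private training data from the gradients a client shares in federated learning, with the Inversion Influence Function (I²F) giving a closed-form handle on how gradient perturbations affect the recovered data. A minimal gradient-matching sketch in the style of DLG (Zhu et al., NeurIPS 2019) follows; the toy model, data shapes, and optimizer settings are illustrative assumptions, not code from this repository.

```python
# Illustrative DLG-style gradient-matching attack (Zhu et al., 2019), the
# threat model that Inversion Influence Functions analyze. The model, shapes,
# and hyperparameters here are assumptions for the sketch, not this repo's code.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy victim model
criterion = nn.CrossEntropyLoss()

# The victim's private sample and the gradient it would share with the server.
x_true = torch.rand(1, 1, 28, 28)
y_true = torch.tensor([3])
shared_grads = torch.autograd.grad(
    criterion(model(x_true), y_true), model.parameters()
)

# The attacker optimizes dummy inputs and soft labels so that their gradient
# matches the shared one; a close match means the dummy approximates x_true.
x_dummy = torch.rand_like(x_true, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)
optimizer = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    optimizer.zero_grad()
    loss = criterion(model(x_dummy), y_dummy.softmax(dim=-1))
    dummy_grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
    grad_diff = sum(
        ((dg - sg) ** 2).sum() for dg, sg in zip(dummy_grads, shared_grads)
    )
    grad_diff.backward()
    return grad_diff

for _ in range(30):
    optimizer.step(closure)

print(torch.dist(x_dummy, x_true).item())  # small => the gradient leaked x_true
```

The I²F idea, roughly, is to predict the reconstruction error of such an attack under small gradient perturbations (e.g., DP noise or gradient pruning) without running the full optimization; see the paper for the exact formulation.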
Alternatives and similar repositories for inversion-influence-function:
Users interested in inversion-influence-function are comparing it to the repositories listed below
- [ICLR 2024] "Backdoor Federated Learning by Poisoning Backdoor-Critical Layers" ☆33 · Updated 4 months ago
- ☆25 · Updated 2 years ago
- ☆68 · Updated 2 years ago
- Implementation of the paper "Membership Inference Attacks Against Machine Learning Models", Shokri et al. ☆60 · Updated 5 years ago
- ICML 2022 code for "Neurotoxin: Durable Backdoors in Federated Learning" https://arxiv.org/abs/2206.10341 ☆72 · Updated 2 years ago
- [USENIX Security 2022] Mitigating Membership Inference Attacks by Self-Distillation Through a Novel Ensemble Architecture ☆16 · Updated 2 years ago
- A PyTorch implementation of the paper "Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage" ☆57 · Updated 2 years ago
- ☆10 · Updated 3 years ago
- [ICLR 2023, Best Paper Award at ECCV’22 AROW Workshop] FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning ☆55 · Updated 4 months ago
- ☆54 · Updated 2 years ago
- Official implementation of "RelaxLoss: Defending Membership Inference Attacks without Losing Utility" (ICLR 2022) ☆49 · Updated 2 years ago
- Official implementation of "Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective" ☆55 · Updated last year
- ☆27 · Updated last year
- [ICML 2023] Are Diffusion Models Vulnerable to Membership Inference Attacks? ☆34 · Updated 7 months ago
- PyTorch implementation of backdoor unlearning ☆17 · Updated 2 years ago
- A Fine-grained Differentially Private Federated Learning against Leakage from Gradients ☆14 · Updated 2 years ago
- ☆19 · Updated last year
- Code for the NeurIPS 2021 paper "Adversarial Neuron Pruning Purifies Backdoored Deep Models" ☆57 · Updated last year
- ☆38 · Updated 4 years ago
- Code for the AAAI-21 paper "Defending against Backdoors in Federated Learning with Robust Learning Rate" ☆33 · Updated 2 years ago
- CRFL: Certifiably Robust Federated Learning against Backdoor Attacks (ICML 2021) ☆73 · Updated 3 years ago
- 🔒 Implementation of Shokri et al. (2016), "Membership Inference Attacks against Machine Learning Models" ☆34 · Updated 2 years ago
- ☆21 · Updated 3 years ago
- Camouflage poisoning via machine unlearning ☆17 · Updated 2 years ago
- ☆24 · Updated 3 years ago
- R-GAP: Recursive Gradient Attack on Privacy [Accepted at ICLR 2021] ☆36 · Updated 2 years ago
- ☆19 · Updated 7 months ago
- [ICML 2022 / ICLR 2024] Source code for our papers "Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks" and "Be C… ☆42 · Updated 8 months ago
- Privacy attacks on Split Learning ☆40 · Updated 3 years ago
- [CVPRW'22] A privacy attack that exploits adversarially trained models to compromise the privacy of federated learning systems ☆12 · Updated 2 years ago