Jiaqi0602 / adversarial-attack-from-leakage
From Gradient Leakage to Adversarial Attacks in Federated Learning
☆16 · Updated 4 years ago
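The repository's topic is gradient-leakage-based attacks in federated learning. For orientation, below is a minimal, self-contained sketch of the generic DLG-style gradient-matching attack (Zhu et al., "Deep Leakage from Gradients") that this line of work builds on. The toy model, input shapes, and optimizer settings are assumptions chosen for illustration; this is not the code of this repository.

```python
# Illustrative DLG-style gradient-matching sketch; NOT this repository's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy model (assumption)

# The victim's private batch and the gradient it would share in federated learning.
x_true = torch.rand(1, 1, 28, 28)
y_true = torch.tensor([3])
loss_true = F.cross_entropy(model(x_true), y_true)
true_grads = torch.autograd.grad(loss_true, model.parameters())

# The attacker initializes dummy data and a soft dummy label, then optimizes them
# so that their gradient matches the leaked one.
x_dummy = torch.rand_like(x_true, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)
opt = torch.optim.LBFGS([x_dummy, y_dummy])

for _ in range(30):
    def closure():
        opt.zero_grad()
        pred = model(x_dummy)
        # Cross-entropy with the (soft) dummy label, as in the original DLG setup.
        loss_dummy = torch.sum(-F.softmax(y_dummy, dim=-1) * F.log_softmax(pred, dim=-1))
        dummy_grads = torch.autograd.grad(loss_dummy, model.parameters(), create_graph=True)
        # L2 distance between dummy and leaked gradients.
        grad_diff = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
        grad_diff.backward()
        return grad_diff
    opt.step(closure)

print("reconstruction error:", (x_dummy.detach() - x_true).abs().mean().item())
```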
Alternatives and similar repositories for adversarial-attack-from-leakage
Users interested in adversarial-attack-from-leakage are comparing it to the repositories listed below:
- The code for "Improved Deep Leakage from Gradients" (iDLG). ☆165 · Updated 4 years ago
- Membership Inference Attacks and Defenses in Neural Network Pruning ☆28 · Updated 3 years ago
- R-GAP: Recursive Gradient Attack on Privacy [Accepted at ICLR 2021] ☆37 · Updated 2 years ago
- [CVPRW'22] A privacy attack that exploits Adversarial Training models to compromise the privacy of Federated Learning systems. ☆11 · Updated 3 years ago
- Algorithms to recover input data from their gradient signal through a neural network ☆311 · Updated 2 years ago
- A PyTorch implementation of the paper "Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage". ☆62 · Updated 3 years ago
- ☆24 · Updated 3 years ago
- ☆31 · Updated 5 years ago
- Query-Efficient Data-Free Learning from Black-Box Models ☆23 · Updated 2 years ago
- InstaHide: Instance-hiding Schemes for Private Distributed Learning ☆50 · Updated 5 years ago
- CRFL: Certifiably Robust Federated Learning against Backdoor Attacks (ICML 2021) ☆74 · Updated 4 years ago
- ☆21 · Updated 4 years ago
- Official implementation of "GS-WGAN: A Gradient-Sanitized Approach for Learning Differentially Private Generators" (NeurIPS 2020) ☆70 · Updated 3 years ago
- [NeurIPS 2019] This is the code repo of our novel passport-based DNN ownership verification schemes, i.e. we embed passport layer into va… ☆84 · Updated 2 years ago
- Official implementation of "RelaxLoss: Defending Membership Inference Attacks without Losing Utility" (ICLR 2022) ☆48 · Updated 3 years ago
- ☆73 · Updated 3 years ago
- Breaching privacy in federated learning scenarios for vision and text ☆312 · Updated last week
- [NeurIPS 2024] Official implementation of the paper "Ferrari: Federated Feature Unlearning via Optimizing Feature Sensitivity" ☆23 · Updated 4 months ago
- Instance-wise Batch Label Restoration via Gradients in Federated Learning (ICLR 2023) ☆11 · Updated 2 years ago
- The code for our Updates-Leak paper ☆17 · Updated 5 years ago
- The code of the AAAI-21 paper "Defending against Backdoors in Federated Learning with Robust Learning Rate". ☆35 · Updated 3 years ago
- Official Repository for ResSFL (accepted by CVPR '22) ☆26 · Updated 3 years ago
- ☆80 · Updated 3 years ago
- [CCS 2021] "DataLens: Scalable Privacy Preserving Training via Gradient Compression and Aggregation" by Boxin Wang*, Fan Wu*, Yunhui Long… ☆36 · Updated 4 years ago
- [ICLR 2023] Distilling Cognitive Backdoor Patterns within an Image ☆36 · Updated 3 months ago
- Code to reproduce experiments in "Antipodes of Label Differential Privacy: PATE and ALIBI" ☆32 · Updated 3 years ago
- Official implementation of "When Machine Unlearning Jeopardizes Privacy" (ACM CCS 2021) ☆50 · Updated 3 years ago
- Differentially Private Diffusion Models ☆105 · Updated 2 years ago
- ☆32 · Updated last year
- Official implementation of "Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective" ☆57 · Updated 2 years ago
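Several of the repositories above are defenses against gradient leakage rather than attacks. As a rough illustration of the simplest mitigation family (clipping and noising the shared update before upload, in the spirit of DP-SGD), a simplified sketch follows. The helper name, hyperparameters, and the batch-level clipping shortcut are assumptions for illustration and are not the method of any listed repository; real differential-privacy guarantees require per-sample clipping and a privacy accountant.

```python
# Illustrative only: clip the aggregate batch gradient and add Gaussian noise
# before a client shares its update. All names and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def noisy_update(model, x, y, clip_norm=1.0, noise_std=0.1):
    """Return a clipped, noised gradient that a client could share instead of the raw one."""
    loss = F.cross_entropy(model(x), y)
    grads = torch.autograd.grad(loss, model.parameters())
    # Global L2 norm across all parameter gradients.
    total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
    scale = min(1.0, clip_norm / (total_norm.item() + 1e-12))
    return [g * scale + noise_std * torch.randn_like(g) for g in grads]

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy model (assumption)
shared = noisy_update(model, torch.rand(8, 1, 28, 28), torch.randint(0, 10, (8,)))
print([g.shape for g in shared])
```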