csl-cqu / awesome-secure-federated-learning-papers
☆27 · Updated last year
Related projects:
- Code for Exploiting Unintended Feature Leakage in Collaborative Learning (in Oakland 2019) ☆53 · Updated 5 years ago
- Code for the paper: Label-Only Membership Inference Attacks ☆61 · Updated 3 years ago
- ☆45 · Updated 4 years ago
- Code for ML Doctor ☆84 · Updated last month
- ☆38 · Updated 3 years ago
- Privacy attacks on Split Learning ☆37 · Updated 2 years ago
- Privacy Risks of Securing Machine Learning Models against Adversarial Examples ☆44 · Updated 4 years ago
- Learning from history for Byzantine Robustness ☆21 · Updated 3 years ago
- ☆22 · Updated 3 years ago
- ☆50 · Updated last year
- Code for Machine Learning Models that Remember Too Much (in CCS 2017) ☆30 · Updated 6 years ago
- ☆31 · Updated 4 years ago
- ☆63 · Updated 2 years ago
- ☆10 · Updated last year
- Code for the paper "ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models" ☆80 · Updated 2 years ago
- Official implementation of "Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective" ☆52 · Updated last year
- Official implementation of "RelaxLoss: Defending Membership Inference Attacks without Losing Utility" (ICLR 2022) ☆45 · Updated 2 years ago
- Code to reproduce experiments in "Antipodes of Label Differential Privacy: PATE and ALIBI" ☆29 · Updated 2 years ago
- Membership Inference, Attribute Inference and Model Inversion attacks implemented using PyTorch ☆54 · Updated last year
- Verifying machine unlearning by backdooring ☆18 · Updated last year
- [CCS 2021] "DataLens: Scalable Privacy Preserving Training via Gradient Compression and Aggregation" by Boxin Wang*, Fan Wu*, Yunhui Long… ☆37 · Updated 2 years ago
- Improved DP-SGD for optimization ☆16 · Updated 5 years ago
- A Python script to generate a clean BibTeX file for LaTeX ☆15 · Updated 4 years ago
- Membership Inference Attacks and Defenses in Neural Network Pruning ☆25 · Updated 2 years ago
- A PyTorch implementation of the paper "Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage" ☆55 · Updated last year
- Code for the AAAI-21 paper "Defending against Backdoors in Federated Learning with Robust Learning Rate" ☆30 · Updated last year
- Official implementation of "When Machine Unlearning Jeopardizes Privacy" (ACM CCS 2021)☆45Updated 2 years ago
- Reveals the vulnerabilities of SplitNN ☆30 · Updated 2 years ago
- ☆60 · Updated 3 years ago
- The code for our Updates-Leak paper ☆17 · Updated 4 years ago