csl-cqu / awesome-secure-federated-learning-papers
☆29 · Updated 2 years ago
Alternatives and similar repositories for awesome-secure-federated-learning-papers:
Users interested in awesome-secure-federated-learning-papers are comparing it to the repositories listed below.
- Code for Exploiting Unintended Feature Leakage in Collaborative Learning (Oakland 2019) ☆53 · Updated 5 years ago
- Code for the paper: Label-Only Membership Inference Attacks ☆65 · Updated 3 years ago
- Code for the AAAI-21 paper "Defending against Backdoors in Federated Learning with Robust Learning Rate" ☆33 · Updated 2 years ago
- FLTracer: Accurate Poisoning Attack Provenance in Federated Learning ☆21 · Updated 10 months ago
- Code for ML Doctor ☆87 · Updated 8 months ago
- [ICLR 2023, Best Paper Award at ECCV’22 AROW Workshop] FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning ☆55 · Updated 4 months ago
- ☆45 · Updated 5 years ago
- ☆28 · Updated 2 years ago
- ☆68 · Updated 2 years ago
- ☆25 · Updated 3 years ago
- Code for the paper "ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models" ☆83 · Updated 3 years ago
- ☆31 · Updated 5 years ago
- CRFL: Certifiably Robust Federated Learning against Backdoor Attacks (ICML 2021) ☆73 · Updated 3 years ago
- ☆54 · Updated 2 years ago
- [CCS 2021] "DataLens: Scalable Privacy Preserving Training via Gradient Compression and Aggregation" by Boxin Wang*, Fan Wu*, Yunhui Long… ☆37 · Updated 3 years ago
- Learning from history for Byzantine Robustness ☆23 · Updated 3 years ago
- Official implementation of "When Machine Unlearning Jeopardizes Privacy" (ACM CCS 2021) ☆47 · Updated 2 years ago
- Privacy Risks of Securing Machine Learning Models against Adversarial Examples ☆44 · Updated 5 years ago
- ICML 2022 code for "Neurotoxin: Durable Backdoors in Federated Learning" https://arxiv.org/abs/2206.10341 ☆73 · Updated 2 years ago
- CVPR 2021 official repository for the Data-Free Model Extraction paper. https://arxiv.org/abs/2011.14779 ☆71 · Updated last year
- Input Purification Defense Against Trojan Attacks on Deep Neural Network Systems ☆27 · Updated 4 years ago
- ☆38 · Updated 4 years ago
- Code for Machine Learning Models that Remember Too Much (CCS 2017) ☆30 · Updated 7 years ago
- Membership Inference Attacks and Defenses in Neural Network Pruning ☆28 · Updated 2 years ago
- Official implementation of "Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective" ☆55 · Updated last year
- ☆31 · Updated 7 months ago
- ABS: Scanning Neural Networks for Back-doors by Artificial Brain Stimulation ☆50 · Updated 2 years ago
- Privacy attacks on Split Learning ☆40 · Updated 3 years ago
- A PyTorch implementation of the paper "Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage" ☆57 · Updated 2 years ago
- [ICML 2023] Official code implementation of "Chameleon: Adapting to Peer Images for Planting Durable Backdoors in Federated Learning" (htt… ☆40 · Updated 3 months ago