csl-cqu / awesome-secure-federated-learning-papers
☆29 · Updated 2 years ago
Alternatives and similar repositories for awesome-secure-federated-learning-papers
Users interested in awesome-secure-federated-learning-papers are comparing it to the libraries listed below.
- Code for Exploiting Unintended Feature Leakage in Collaborative Learning (Oakland 2019) ☆53 · Updated 6 years ago
- Code for ML Doctor ☆91 · Updated 11 months ago
- Code for the paper "ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models" ☆85 · Updated 3 years ago
- ☆45 · Updated 5 years ago
- Code for the paper "Label-Only Membership Inference Attacks" ☆66 · Updated 3 years ago
- Code & supplementary material of the paper Label Inference Attacks Against Federated Learning on Usenix Security 2022.☆83Updated 2 years ago
- Privacy attacks on Split Learning☆42Updated 3 years ago
- ☆70 · Updated 3 years ago
- Systematic Evaluation of Membership Inference Privacy Risks of Machine Learning Models ☆127 · Updated last year
- Learning from history for Byzantine Robustness ☆24 · Updated 4 years ago
- Official implementation of "When Machine Unlearning Jeopardizes Privacy" (ACM CCS 2021) ☆48 · Updated 3 years ago
- ☆25 · Updated 4 years ago
- CRFL: Certifiably Robust Federated Learning against Backdoor Attacks (ICML 2021) ☆73 · Updated 3 years ago
- The code for "Improved Deep Leakage from Gradients" (iDLG) ☆152 · Updated 4 years ago
- ☆55 · Updated 2 years ago
- ICML 2022 code for "Neurotoxin: Durable Backdoors in Federated Learning" (https://arxiv.org/abs/2206.10341) ☆73 · Updated 2 years ago
- Official implementation of "Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective" ☆57 · Updated 2 years ago
- [ICLR 2023, Best Paper Award at ECCV’22 AROW Workshop] FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning ☆57 · Updated 7 months ago
- Code for the AAAI-21 paper "Defending against Backdoors in Federated Learning with Robust Learning Rate" ☆34 · Updated 2 years ago
- Code for Membership Inference Attack against Machine Learning Models (Oakland 2017) ☆194 · Updated 7 years ago
- FLTracer: Accurate Poisoning Attack Provenance in Federated Learning ☆22 · Updated last year
- Official implementation of "RelaxLoss: Defending Membership Inference Attacks without Losing Utility" (ICLR 2022) ☆50 · Updated 2 years ago
- Evaluates the privacy leakage of differentially private machine learning models ☆135 · Updated 2 years ago
- A Python script to generate a clean BibTeX file for LaTeX ☆16 · Updated 5 years ago
- Membership Inference Attacks and Defenses in Neural Network Pruning ☆28 · Updated 3 years ago
- Code to reproduce experiments in "Antipodes of Label Differential Privacy: PATE and ALIBI" ☆32 · Updated 3 years ago
- Privacy Risks of Securing Machine Learning Models against Adversarial Examples ☆44 · Updated 5 years ago
- Paper code ☆27 · Updated 4 years ago
- Membership Inference, Attribute Inference and Model Inversion attacks implemented using PyTorch ☆62 · Updated 9 months ago
- Membership inference attacks against federated learning ☆9 · Updated 4 years ago