hepucuncao / Membership-Inference-Attack
☆9 · Updated 6 months ago
Alternatives and similar repositories for Membership-Inference-Attack
Users interested in Membership-Inference-Attack are comparing it to the libraries listed below.
- ☆52 · Updated 2 years ago
- IBA: Towards Irreversible Backdoor Attacks in Federated Learning (Poster at NeurIPS 2023) ☆36 · Updated last year
- ☆31 · Updated last year
- ☆23 · Updated last year
- Code & supplementary material for the paper "Label Inference Attacks Against Federated Learning" (USENIX Security 2022) ☆83 · Updated 2 years ago
- Paper code ☆27 · Updated 4 years ago
- ☆16 · Updated last year
- ☆14 · Updated last year
- [USENIX Security 2024] Official code implementation of "BackdoorIndicator: Leveraging OOD Data for Proactive Backdoor Detection in Federa… ☆39 · Updated 9 months ago
- [ICML 2023] Official code implementation of "Chameleon: Adapting to Peer Images for Planting Durable Backdoors in Federated Learning (htt… ☆42 · Updated 6 months ago
- Code for "Data Poisoning Attacks Against Federated Learning Systems" ☆195 · Updated 4 years ago
- ☆31 · Updated 4 years ago
- GitHub repo for the AAAI 2023 paper "On the Vulnerability of Backdoor Defenses for Federated Learning" ☆38 · Updated 2 years ago
- ☆336 · Updated 2 weeks ago
- ☆26 · Updated last year
- Implementation code of the paper "A Practical Clean-Label Backdoor Attack with Limited Information in Vertical Federated Learning" ☆11 · Updated 2 years ago
- ☆25 · Updated 4 years ago
- Reproduction of the FLTrust model from the paper "FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping" ☆30 · Updated 2 years ago
- Paper notes and code for differentially private machine learning ☆357 · Updated 7 months ago
- Backdoor detection in federated learning with similarity measurement ☆23 · Updated 3 years ago
- DBA: Distributed Backdoor Attacks against Federated Learning (ICLR 2020) ☆196 · Updated 3 years ago
- Official implementation of "Lurking in the Shadows: Unveiling Stealthy Backdoor Attacks against Personalized Federated Learning" ☆10 · Updated 5 months ago
- Code for the NDSS 2021 paper "Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses Against Federated Learning" ☆147 · Updated 2 years ago
- FLPoison: Benchmarking Poisoning Attacks and Defenses in Federated Learning ☆30 · Updated 4 months ago
- ☆39 · Updated last year
- DPSUR ☆26 · Updated 5 months ago
- Code of the attack scheme in the paper "Backdoor Attack Against Split Neural Network-Based Vertical Federated Learning" ☆19 · Updated last year
- Chinese translation of "The Algorithmic Foundations of Differential Privacy" by Cynthia Dwork ☆164 · Updated 2 years ago
- Multi-metrics adaptively identifies backdoors in federated learning ☆27 · Updated last week
- WeBank AI ☆41 · Updated 4 months ago