TTitcombe / Model-Inversion-SplitNN
Research into model inversion on SplitNN
☆14 · updated last year
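For readers new to the topic: in split learning (SplitNN), a network is partitioned between a data-owning client and a server, and only intermediate activations ("smashed data") cross the boundary; model inversion attacks attempt to reconstruct the client's raw inputs from those activations. Below is a minimal, hypothetical PyTorch sketch of the setup being attacked; layer sizes and variable names are illustrative, not this repository's API.

```python
# Minimal SplitNN sketch (hypothetical shapes; not this repository's code).
# The client computes intermediate activations and sends them to the server;
# model inversion attacks try to recover the raw input from those activations.
import torch
import torch.nn as nn

client_model = nn.Sequential(      # runs on the data owner's device
    nn.Flatten(),
    nn.Linear(28 * 28, 256),
    nn.ReLU(),
)
server_model = nn.Sequential(      # runs on the (potentially untrusted) server
    nn.Linear(256, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)

x = torch.randn(8, 1, 28, 28)      # dummy batch of MNIST-shaped inputs
smashed = client_model(x)          # only these activations leave the client
logits = server_model(smashed)     # server completes the forward pass
print(logits.shape)                # torch.Size([8, 10])
```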
Alternatives and similar repositories for Model-Inversion-SplitNN:
Users interested in Model-Inversion-SplitNN are comparing it to the repositories listed below.
- Code for "Exploiting Unintended Feature Leakage in Collaborative Learning" (Oakland 2019) · ☆53 · updated 5 years ago
- Privacy attacks on Split Learning · ☆41 · updated 3 years ago
- Revealing the vulnerabilities of SplitNN · ☆31 · updated 2 years ago
- PyTorch implementation of NoPeekNN · ☆16 · updated 4 years ago
- ☆44 · updated 3 years ago
- Official implementation of "Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective" · ☆56 · updated 2 years ago
- ☆14 · updated last year
- Code for the AAAI-21 paper "Defending against Backdoors in Federated Learning with Robust Learning Rate" · ☆33 · updated 2 years ago
- ☆69 · updated 2 years ago
- CRFL: Certifiably Robust Federated Learning against Backdoor Attacks (ICML 2021) · ☆73 · updated 3 years ago
- ☆29 · updated 2 years ago
- ☆36 · updated 3 years ago
- ☆54 · updated 3 years ago
- ☆31 · updated 5 years ago
- [USENIX Security 2024] Official code implementation of "BackdoorIndicator: Leveraging OOD Data for Proactive Backdoor Detection in Federated Learning" · ☆36 · updated 7 months ago
- Robust aggregation for federated learning with the RFA algorithm (see the sketch after this list) · ☆48 · updated 2 years ago
- Code for "Analyzing Federated Learning through an Adversarial Lens" (https://arxiv.org/abs/1811.12470) · ☆150 · updated 2 years ago
- Paper code · ☆26 · updated 4 years ago
- [CCS 2021] "DataLens: Scalable Privacy Preserving Training via Gradient Compression and Aggregation" by Boxin Wang*, Fan Wu*, Yunhui Long… · ☆37 · updated 3 years ago
- ☆55 · updated 2 years ago
- Privacy Risks of Securing Machine Learning Models against Adversarial Examples · ☆44 · updated 5 years ago
- Code to reproduce experiments in "Antipodes of Label Differential Privacy: PATE and ALIBI" · ☆31 · updated 3 years ago
- Official implementation of "FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective" · ☆39 · updated 3 years ago
- Code for the paper "ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models" · ☆84 · updated 3 years ago
- ICML 2022 code for "Neurotoxin: Durable Backdoors in Federated Learning" (https://arxiv.org/abs/2206.10341) · ☆74 · updated 2 years ago
- Official code for the KDD 2022 paper "FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients" · ☆82 · updated 2 years ago
- Adversarial attacks and defenses against federated learning · ☆17 · updated last year
- ☆10 · updated 3 years ago
- Code for the AAAI 2021 paper "Membership Privacy for Machine Learning Models Through Knowledge Transfer" · ☆11 · updated 4 years ago
- Implementation of "BapFL: You Can Backdoor Personalized Federated Learning" · ☆13 · updated last year
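On the RFA entry above: RFA replaces the federated mean with an approximate geometric median of client updates (computed via smoothed Weiszfeld iterations), so a few poisoned updates cannot drag the aggregate arbitrarily far. The sketch below illustrates that idea in simplified form; the function name, iteration count, and smoothing constant are hypothetical and not taken from the repository.

```python
# Hedged sketch of geometric-median aggregation in the spirit of RFA
# (smoothed Weiszfeld iterations); a simplified illustration, not the
# paper's exact algorithm or the repository's API.
import torch

def geometric_median(updates, iters=10, eps=1e-6):
    """updates: list of 1-D tensors (flattened client model updates)."""
    points = torch.stack(updates)       # (n_clients, dim)
    median = points.mean(dim=0)         # initialize at the plain average
    for _ in range(iters):
        # Distance of each client update to the current estimate;
        # clamping by eps is the smoothing that avoids division by zero.
        dists = torch.norm(points - median, dim=1).clamp(min=eps)
        weights = 1.0 / dists           # far-away (outlier) updates get low weight
        median = (weights[:, None] * points).sum(dim=0) / weights.sum()
    return median

# Nine benign updates plus one wildly scaled (poisoned) one:
clients = [torch.randn(100) for _ in range(9)] + [100 * torch.randn(100)]
print(geometric_median(clients).norm())  # stays near the benign cluster
```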