ebagdasa / backdoor_federated_learning
Source code for paper "How to Backdoor Federated Learning" (https://arxiv.org/abs/1807.00459)
☆287 · Updated 6 months ago
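The headline paper introduces a model-replacement backdoor: because FedAvg averages client updates, a single attacker can scale its submitted model so the average collapses to the attacker's backdoored weights. The sketch below is illustrative only (not the repository's code); `fedavg` and `attacker_update` are hypothetical names, and models are plain lists of floats for simplicity.

```python
# Minimal sketch of the model-replacement attack from
# "How to Backdoor Federated Learning" (assumed simplification: FedAvg is a
# plain unweighted average over n client models).

def fedavg(global_model, client_models):
    """Unweighted FedAvg: element-wise mean of the submitted models."""
    n = len(client_models)
    return [sum(m[i] for m in client_models) / n
            for i in range(len(global_model))]

def attacker_update(global_model, backdoored_model, n_clients):
    """Scale the malicious delta by n so averaging yields the target model."""
    return [g + n_clients * (b - g)
            for g, b in zip(global_model, backdoored_model)]

n = 10
g = [0.0, 0.0, 0.0, 0.0]                    # current global model
benign = [list(g) for _ in range(n - 1)]    # benign clients submit no change
target = [1.0, -2.0, 0.5, 3.0]              # attacker's backdoored model
mal = attacker_update(g, target, n)

new_global = fedavg(g, benign + [mal])
# new_global equals target: one scaled update replaced the global model
```

Defenses in the listed repositories (norm clipping, Byzantine-robust aggregation, certified robustness as in CRFL) target exactly this scaling behavior.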
Alternatives and similar repositories for backdoor_federated_learning:
Users interested in backdoor_federated_learning are comparing it to the libraries listed below.
- DBA: Distributed Backdoor Attacks against Federated Learning (ICLR 2020) ☆183 · Updated 3 years ago
- Code for "Analyzing Federated Learning through an Adversarial Lens" (https://arxiv.org/abs/1811.12470) ☆147 · Updated 2 years ago
- Code for "Data Poisoning Attacks Against Federated Learning Systems" ☆180 · Updated 3 years ago
- Code for the NDSS 2021 paper "Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses Against Federated Learning" ☆140 · Updated 2 years ago
- Implementation of a DP-based federated learning framework using PyTorch ☆292 · Updated last year
- Code for "Improved Deep Leakage from Gradients" (iDLG) ☆146 · Updated 3 years ago
- [NeurIPS 2019 FL workshop] Federated Learning with Local and Global Representations ☆230 · Updated 6 months ago
- Simulate a federated setting and run differentially private federated learning ☆366 · Updated 6 months ago
- A sybil-resilient distributed learning protocol ☆100 · Updated last year
- On the Convergence of FedAvg on Non-IID Data ☆257 · Updated 2 years ago
- ☆171 · Updated 3 months ago
- Code for "Membership Inference Attacks against Machine Learning Models" (Oakland 2017) ☆193 · Updated 7 years ago
- Algorithms to recover input data from their gradient signal through a neural network ☆278 · Updated last year
- Standard federated learning implementations in FedLab and FL benchmarks ☆155 · Updated last year
- Ditto: Fair and Robust Federated Learning Through Personalization (ICML 2021) ☆139 · Updated 2 years ago
- Privacy-Preserving Vertical Federated Learning ☆216 · Updated last year
- Clustered Federated Learning: Model-Agnostic Distributed Multi-Task Optimization under Privacy Constraints ☆163 · Updated 3 years ago
- CRFL: Certifiably Robust Federated Learning against Backdoor Attacks (ICML 2021) ☆71 · Updated 3 years ago
- Code and data accompanying the FedGen paper ☆249 · Updated 3 months ago
- A project to evaluate the privacy leakage of differentially private machine learning models ☆130 · Updated 2 years ago
- FedMD: Heterogenous Federated Learning via Model Distillation ☆150 · Updated 3 years ago
- An open-source FL implementation in PyTorch with datasets FEMNIST, Shakespeare, MNIST, CIFAR-10, and Fashion-MNIST ☆121 · Updated last year
- Differential Privacy Preservation in Deep Learning under Model Attacks ☆132 · Updated 3 years ago
- A Simulator for Privacy-Preserving Federated Learning ☆93 · Updated 4 years ago
- ☆156 · Updated 2 years ago
- [NeurIPS 2019] Deep Leakage From Gradients ☆424 · Updated 2 years ago
- A federated learning attack model based on "A Little Is Enough: Circumventing Defenses For Distributed Learning" ☆62 · Updated 4 years ago
- Backdoors Framework for Deep Learning and Federated Learning: a lightweight tool for conducting research on backdoors ☆350 · Updated 2 years ago
- ⚔️ Blades: A Unified Benchmark Suite for Attacks and Defenses in Federated Learning ☆139 · Updated 6 months ago
- ICML 2022 code for "Neurotoxin: Durable Backdoors in Federated Learning" (https://arxiv.org/abs/2206.10341) ☆68 · Updated last year