congxie1108 / icml2019_zeno
☆15 · Updated 5 years ago
Alternatives and similar repositories for icml2019_zeno:
Users interested in icml2019_zeno are comparing it to the repositories listed below.
- A list of papers on Federated Learning, with a focus on malicious clients and attacks. ☆12 · Updated 4 years ago
- Code for "Analyzing Federated Learning through an Adversarial Lens" https://arxiv.org/abs/1811.12470 ☆147 · Updated 2 years ago
- Robust aggregation for federated learning with the RFA algorithm. ☆47 · Updated 2 years ago
- Concentrated Differentially Private Gradient Descent with Adaptive per-iteration Privacy Budget ☆49 · Updated 6 years ago
- A simple backdoor model for federated learning. We use MNIST as the original data set for the data attack, and we use the CIFAR-10 data set… ☆14 · Updated 4 years ago
- ☆38 · Updated 3 years ago
- CRFL: Certifiably Robust Federated Learning against Backdoor Attacks (ICML 2021) ☆71 · Updated 3 years ago
- Learning from History for Byzantine Robustness ☆22 · Updated 3 years ago
- An implementation of the paper "A Little Is Enough: Circumventing Defenses For Distributed Learning" (NeurIPS 2019) ☆26 · Updated last year
- Code for our paper "Robust Federated Learning with Attack-Adaptive Aggregation", accepted at FTL-IJCAI'21. ☆44 · Updated last year
- Code for the NDSS 2021 paper "Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses Against Federated Learning" ☆140 · Updated 2 years ago
- Implementation of calibration bounds for differential privacy in the shuffle model ☆23 · Updated 4 years ago
- A sybil-resilient distributed learning protocol. ☆100 · Updated last year
- ☆54 · Updated 2 years ago
- ☆31 · Updated 4 years ago
- Amortized version of the differentially private SGD algorithm published in "Deep Learning with Differential Privacy" by Abadi et al. Enfo… ☆41 · Updated 10 months ago
- The official code of the KDD22 paper "FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clien… ☆74 · Updated last year
- Membership Inference, Attribute Inference, and Model Inversion attacks implemented using PyTorch. ☆58 · Updated 4 months ago
- ☆14 · Updated last year
- ☆68 · Updated 2 years ago
- Code to reproduce experiments in "Antipodes of Label Differential Privacy: PATE and ALIBI" ☆30 · Updated 2 years ago
- Webank AI ☆40 · Updated this week
- DBA: Distributed Backdoor Attacks against Federated Learning (ICLR 2020) ☆183 · Updated 3 years ago
- A federated learning attack model based on "A Little Is Enough: Circumventing Defenses For Distributed Learning". ☆62 · Updated 4 years ago
- Official implementation of "FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective"… ☆39 · Updated 3 years ago
- [CCS 2021] "DataLens: Scalable Privacy Preserving Training via Gradient Compression and Aggregation" by Boxin Wang*, Fan Wu*, Yunhui Long… ☆37 · Updated 3 years ago
- Code for "Exploiting Unintended Feature Leakage in Collaborative Learning" (Oakland 2019) ☆53 · Updated 5 years ago
- Simple Differential Privacy in PyTorch ☆48 · Updated 4 years ago
- ☆18 · Updated 4 years ago
- Code for the TPDS paper "Towards Fair and Privacy-Preserving Federated Deep Models" ☆31 · Updated 2 years ago