This is a simple backdoor-attack example for federated learning. We use the MNIST dataset for the data-poisoning attack and the CIFAR-10 dataset for the backdoored model in the model-replacement attack. This is a brief reproduction of the paper "How To Backdoor Federated Learning".
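The model-replacement attack from that paper can be sketched in a few lines: under federated averaging, an attacker who controls one client scales its malicious update by roughly n/η (number of clients over server learning rate) so that, once averaged with near-converged benign updates, the global model is replaced by the backdoored one. The sketch below is a minimal numerical illustration, not this repository's code; the names `fedavg` and `model_replacement` are illustrative.

```python
import numpy as np

def fedavg(global_w, client_ws, eta=1.0):
    # Server aggregation: G_{t+1} = G_t + (eta / n) * sum_i (L_i - G_t)
    n = len(client_ws)
    return global_w + (eta / n) * sum(w - global_w for w in client_ws)

def model_replacement(global_w, backdoored_w, n, eta=1.0):
    # Attacker submits L = G + (n / eta) * (X - G), so that after averaging
    # the global model becomes X, assuming benign updates are close to G.
    return global_w + (n / eta) * (backdoored_w - global_w)

if __name__ == "__main__":
    G = np.array([1.0, 2.0])          # current global model (toy weights)
    X = np.array([5.0, -1.0])         # attacker's backdoored target model
    benign = [G.copy()] * 3           # converged benign clients send ~G
    atk = model_replacement(G, X, n=4, eta=1.0)
    print(fedavg(G, benign + [atk]))  # the new global model equals X
```

With near-converged benign clients the aggregated model lands exactly on the attacker's target; in practice the scaling factor is tuned to stay below anomaly-detection thresholds.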
☆14 · Jun 19, 2020 · Updated 5 years ago
Alternatives and similar repositories for Federated-Learning-Backdoor-Example-with-MNIST-and-CIFAR-10
- Documentation of the TensorFlow/Keras implementation of Latent Backdoor Attacks. Please see the paper for details: Latent Back… ☆22 · Updated Sep 8, 2021
- DBA: Distributed Backdoor Attacks against Federated Learning (ICLR 2020) ☆203 · Updated Aug 5, 2021
- ☆38 · Updated Apr 9, 2021
- Source code of FedAttack. ☆11 · Updated Feb 9, 2022
- How Robust are Randomized Smoothing based Defenses to Data Poisoning? (CVPR 2021) ☆14 · Updated Jul 16, 2021
- ICML 2022 code for "Neurotoxin: Durable Backdoors in Federated Learning" https://arxiv.org/abs/2206.10341 ☆83 · Updated Apr 1, 2023
- [NeurIPS 2021] Source code for the paper "Qu-ANTI-zation: Exploiting Neural Network Quantization for Achieving Adversarial Outcomes" ☆18 · Updated Nov 9, 2021
- A paper summary of Backdoor Attack against Neural Network ☆13 · Updated Aug 9, 2019
- A sybil-resilient distributed learning protocol. ☆112 · Updated Sep 9, 2025
- Code for "On the Trade-off between Adversarial and Backdoor Robustness" (NIPS 2020) ☆17 · Updated Nov 11, 2020
- Camouflage poisoning via machine unlearning ☆19 · Updated Jul 3, 2025
- Source code for paper "How to Backdoor Federated Learning" (https://arxiv.org/abs/1807.00459)