This is a simple backdoor attack example for federated learning. We use the MNIST dataset for the data-poisoning attack and the CIFAR-10 dataset for the backdoored model in the model-poisoning attack. This is a brief reproduction of the paper "How To Backdoor Federated Learning?".
☆14 · updated Jun 19, 2020
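The model-poisoning attack this repository reproduces is the "model replacement" idea from the paper: the attacker scales its submitted weights so that, after the server averages all client updates, the global model lands on the attacker's backdoored weights. A minimal NumPy sketch of that boosting step under simplified assumptions (toy weight vectors, one attacker, honest updates that stay near the global model; all names here are illustrative, not from the repository's code):

```python
import numpy as np

def fedavg(updates):
    """Server-side FedAvg: average the clients' submitted weight vectors."""
    return np.mean(updates, axis=0)

def model_replacement_update(global_w, backdoored_w, n_clients):
    """Attacker's boosted submission: X = gamma * (w_backdoor - w_global) + w_global,
    with gamma = n_clients (i.e. n / eta for server learning rate eta = 1).
    If the honest updates roughly cancel around the global model, the averaged
    global model ends up approximately at w_backdoor."""
    gamma = n_clients  # boost factor
    return gamma * (backdoored_w - global_w) + global_w

# Toy round: 1 attacker + 9 honest clients whose updates stay near the global model.
rng = np.random.default_rng(0)
global_w = np.zeros(5)
backdoored_w = np.full(5, 2.0)  # hypothetical weights that encode the backdoor
honest = [global_w + 0.01 * rng.standard_normal(5) for _ in range(9)]
malicious = model_replacement_update(global_w, backdoored_w, n_clients=10)
new_global = fedavg(honest + [malicious])
# new_global is close to backdoored_w despite a single malicious client
```

The boost factor is what distinguishes model replacement from plain data poisoning: an unscaled malicious update is diluted by a factor of 1/n during averaging, while the scaled update survives it in a single round.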
Alternatives and similar repositories for Federated-Learning-Backdoor-Example-with-MNIST-and-CIFAR-10
Users that are interested in Federated-Learning-Backdoor-Example-with-MNIST-and-CIFAR-10 are comparing it to the libraries listed below.
- ☆38 · updated Apr 9, 2021
- DBA: Distributed Backdoor Attacks against Federated Learning (ICLR 2020) ☆203 · updated Aug 5, 2021
- Documentation of the TensorFlow/Keras implementation of Latent Backdoor Attacks; see the Latent Backdoor Attacks paper for details ☆22 · updated Sep 8, 2021
- Source code of FedAttack ☆11 · updated Feb 9, 2022
- ICML 2022 code for "Neurotoxin: Durable Backdoors in Federated Learning" (https://arxiv.org/abs/2206.10341) ☆83 · updated Apr 1, 2023
- A sybil-resilient distributed learning protocol ☆112 · updated Sep 9, 2025
- Source code for the paper "How to Backdoor Federated Learning" (https://arxiv.org/abs/1807.00459) ☆314 · updated Jul 25, 2024
- [NeurIPS 2021] Source code for the paper "Qu-ANTI-zation: Exploiting Neural Network Quantization for Achieving Adversarial Outcomes" ☆18 · updated Nov 9, 2021
- Code for "Data Poisoning Attacks Against Federated Learning Systems" ☆206 · updated Jun 13, 2021
- CRFL: Certifiably Robust Federated Learning against Backdoor Attacks (ICML 2021) ☆74 · updated Aug 5, 2021
- Official code for the KDD 2022 paper "FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients" ☆86 · updated Feb 23, 2023
- ☆73 · updated Jun 7, 2022
- How Robust are Randomized Smoothing based Defenses to Data Poisoning? (CVPR 2021) ☆14 · updated Jul 16, 2021
- Code for "Analyzing Federated Learning through an Adversarial Lens" (https://arxiv.org/abs/1811.12470) ☆153 · updated Oct 3, 2022
- Code for "On the Trade-off between Adversarial and Backdoor Robustness" (NeurIPS 2020) ☆17 · updated Nov 11, 2020
- Backdoors framework for deep learning and federated learning: a lightweight tool for conducting backdoor research ☆379 · updated Feb 5, 2023
- Robust aggregation for federated learning with the RFA algorithm ☆54 · updated Sep 13, 2022
- ☆19 · updated Jun 21, 2021
- A paper summary of backdoor attacks against neural networks ☆13 · updated Aug 9, 2019
- Official implementation of "FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective" ☆44 · updated Oct 29, 2021
- Adversarial attacks and defenses against federated learning ☆20 · updated May 24, 2023
- ☆13 · updated Sep 12, 2021
- Reproduction of the paper "Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning" ☆63 · updated Feb 2, 2023
- AdvDoor: Adversarial Backdoor Attack of Deep Learning System ☆32 · updated Nov 5, 2024
- PyTorch implementation of the paper "LFighter: Defending against Label-flipping Attacks in Federated Learning" ☆18 · updated Mar 6, 2026
- Camouflage poisoning via machine unlearning ☆19 · updated Jul 3, 2025
- Robust Differentially Private Training of Deep Neural Networks ☆12 · updated Dec 10, 2020
- TextHide: Tackling Data Privacy in Language Understanding Tasks ☆30 · updated Apr 19, 2021
- Repository for "An ensemble mechanism to tackle the heterogeneity in asynchronous federated learning" ☆11 · updated Nov 19, 2021
- [CVPR 2021] Scalability vs. Utility: Do We Have to Sacrifice One for the Other in Data Importance Quantification? ☆34 · updated Dec 26, 2020
- Target Agnostic Attack on Deep Models: Exploiting Security Vulnerabilities of Transfer Learning ☆10 · updated Jul 2, 2019
- Input-aware Dynamic Backdoor Attack (NeurIPS 2020) ☆38 · updated Jul 22, 2024
- ☆21 · updated Oct 25, 2021
- Code for the paper "Dynamic Backdoor Attacks Against Machine Learning Models" ☆16 · updated Nov 20, 2023
- ☆18 · updated Feb 2, 2022
- This project demonstrates federated learning methods ☆11 · updated Aug 1, 2019
- Differential-privacy-based federated learning framework with various neural networks and SVM, using PyTorch ☆46 · updated Nov 28, 2022
- Backdoor detection in federated learning with similarity measurement ☆26 · updated Apr 30, 2022
- ☆10 · updated Dec 8, 2018