JingHongyi / Federated-Learning-Backdoor-Example-with-MNIST-and-CIFAR-10
This is a simple backdoor model for federated learning. We use MNIST as the original dataset for the data attack, and the CIFAR-10 dataset for the backdoor model in the model attack. This is a brief reproduction of the paper "How To Backdoor Federated Learning".
☆14 · Jun 19, 2020 · Updated 5 years ago
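The data attack described above is commonly realized as trigger-based poisoning: a malicious client stamps a small pixel pattern onto a fraction of its MNIST images and relabels them to an attacker-chosen target class. A minimal NumPy sketch of that idea (the function name, trigger shape, and fraction are illustrative, not taken from this repository):

```python
import numpy as np

def poison_batch(images, labels, target_label=7, fraction=0.1, seed=0):
    """Stamp a 3x3 white trigger onto a random fraction of a batch
    and flip those labels to the attacker's target class.

    images: (N, 28, 28) float array in [0, 1] (MNIST-shaped)
    labels: (N,) int array
    Returns poisoned copies plus the indices that were altered.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * fraction)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -3:, -3:] = 1.0   # trigger: white square, bottom-right corner
    labels[idx] = target_label    # mislabel to the attacker's target
    return images, labels, idx

# Usage: poison 10% of a toy MNIST-shaped batch
imgs = np.zeros((100, 28, 28))
lbls = np.arange(100) % 10
p_imgs, p_lbls, idx = poison_batch(imgs, lbls, target_label=7, fraction=0.1)
```

A model trained on such a batch learns to associate the trigger pattern with the target label while behaving normally on clean inputs.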
Alternatives and similar repositories for Federated-Learning-Backdoor-Example-with-MNIST-and-CIFAR-10
Users interested in Federated-Learning-Backdoor-Example-with-MNIST-and-CIFAR-10 are comparing it to the libraries listed below.
- Documentation of the TensorFlow/Keras implementation of Latent Backdoor Attacks. Please see the paper for details: Latent Back… ☆21 · Sep 8, 2021 · Updated 4 years ago
- DBA: Distributed Backdoor Attacks against Federated Learning (ICLR 2020) ☆202 · Aug 5, 2021 · Updated 4 years ago
- ☆37 · Apr 9, 2021 · Updated 4 years ago
- Source code of FedAttack. ☆11 · Feb 9, 2022 · Updated 4 years ago
- How Robust are Randomized Smoothing based Defenses to Data Poisoning? (CVPR 2021) ☆14 · Jul 16, 2021 · Updated 4 years ago
- ICML 2022 code for "Neurotoxin: Durable Backdoors in Federated Learning" (https://arxiv.org/abs/2206.10341) ☆83 · Apr 1, 2023 · Updated 2 years ago
- A summary of papers on backdoor attacks against neural networks ☆13 · Aug 9, 2019 · Updated 6 years ago
- A Sybil-resilient distributed learning protocol. ☆110 · Sep 9, 2025 · Updated 5 months ago
- Code for "On the Trade-off between Adversarial and Backdoor Robustness" (NeurIPS 2020) ☆17 · Nov 11, 2020 · Updated 5 years ago
- Camouflage poisoning via machine unlearning ☆19 · Jul 3, 2025 · Updated 7 months ago
- Source code for the paper "How to Backdoor Federated Learning" (https://arxiv.org/abs/1807.00459) ☆313 · Jul 25, 2024 · Updated last year
- CRFL: Certifiably Robust Federated Learning against Backdoor Attacks (ICML 2021) ☆74 · Aug 5, 2021 · Updated 4 years ago
- ☆19 · Jun 21, 2021 · Updated 4 years ago
- Official code for the KDD 2022 paper "FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clien… ☆85 · Feb 23, 2023 · Updated 2 years ago
- Adversarial attacks and defenses against federated learning. ☆20 · May 24, 2023 · Updated 2 years ago
- Code for "Data Poisoning Attacks Against Federated Learning Systems" ☆206 · Jun 13, 2021 · Updated 4 years ago
- Robust aggregation for federated learning with the RFA algorithm. ☆53 · Sep 13, 2022 · Updated 3 years ago
- ☆21 · Oct 25, 2021 · Updated 4 years ago
- Code for "Analyzing Federated Learning through an Adversarial Lens" (https://arxiv.org/abs/1811.12470) ☆152 · Oct 3, 2022 · Updated 3 years ago
- AdvDoor: Adversarial Backdoor Attack of Deep Learning System ☆32 · Nov 5, 2024 · Updated last year
- TextHide: Tackling Data Privacy in Language Understanding Tasks ☆31 · Apr 19, 2021 · Updated 4 years ago
- Backdoors Framework for Deep Learning and Federated Learning. A lightweight tool for conducting research on backdoors. ☆378 · Feb 5, 2023 · Updated 3 years ago
- A reproduction of the paper "Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning". ☆63 · Feb 2, 2023 · Updated 3 years ago
- ☆73 · Jun 7, 2022 · Updated 3 years ago
- Official implementation of "GRNN: Generative Regression Neural Network - A Data Leakage Attack for Federated Learning" ☆33 · Feb 28, 2022 · Updated 3 years ago
- Differential-privacy-based federated learning framework with various neural networks and SVM, using PyTorch. ☆46 · Nov 28, 2022 · Updated 3 years ago
- Model Poisoning Attack to Federated Recommendation ☆32 · Apr 23, 2022 · Updated 3 years ago
- [CVPR 2021] Scalability vs. Utility: Do We Have to Sacrifice One for the Other in Data Importance Quantification? ☆33 · Dec 26, 2020 · Updated 5 years ago
- Methods for removing learned data from neural nets and evaluation of those methods ☆38 · Nov 26, 2020 · Updated 5 years ago
- Research simulation toolkit for federated learning ☆13 · Nov 7, 2020 · Updated 5 years ago
- ☆10 · Dec 8, 2018 · Updated 7 years ago
- ☆46 · Aug 4, 2023 · Updated 2 years ago
- Anonymous GitHub repo for SGSR: Beyond Social Homophily: Score-based Generative Diffusion Models for Social Recommendations ☆12 · Sep 18, 2025 · Updated 4 months ago
- [ICLR 2024] "Backdoor Federated Learning by Poisoning Backdoor-Critical Layers" ☆52 · Dec 11, 2024 · Updated last year
- A study in multi-center imaging diagnostics, emphasizing the modality of cardiovascular magnetic resonance and the prediction of hyper… ☆11 · Jul 14, 2021 · Updated 4 years ago
- End-to-End Gradient Inversion (Gradient Leakage in Federated Learning) (https://ieeexplore.ieee.org/document/9878027) ☆12 · Aug 19, 2022 · Updated 3 years ago
- Federated principal component analysis (FPCA), a master's thesis adapting PCA to a federated learning setting. The technique… ☆11 · Apr 5, 2024 · Updated last year
- Target Agnostic Attack on Deep Models: Exploiting Security Vulnerabilities of Transfer Learning ☆10 · Jul 2, 2019 · Updated 6 years ago
- Repository for the work "An ensemble mechanism to tackle the heterogeneity in asynchronous federated learning" ☆11 · Nov 19, 2021 · Updated 4 years ago
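Several entries above build on the model-replacement attack from "How to Backdoor Federated Learning": the attacker scales its update so that FedAvg's averaging does not dilute it. A minimal numerical sketch of the scaling argument (function and variable names are illustrative, not taken from any listed repository):

```python
import numpy as np

def model_replacement(global_w, backdoored_w, n_clients, eta=1.0):
    """Return the model an attacker submits so that one round of FedAvg
    (w <- w + eta/n * sum of client deltas) lands on backdoored_w,
    assuming benign updates roughly cancel out."""
    gamma = n_clients / eta                       # boost factor
    return global_w + gamma * (backdoored_w - global_w)

# Toy check with zero benign contribution
w_t = np.zeros(4)                                # current global model
x = np.array([0.5, -1.0, 2.0, 0.0])              # attacker's backdoored weights
n, eta = 10, 1.0
submitted = model_replacement(w_t, x, n, eta)
w_next = w_t + (eta / n) * (submitted - w_t)     # FedAvg with only the attacker
```

Because the server divides the summed deltas by n, multiplying the malicious delta by n/eta cancels the averaging, which is why defenses in this list (RFA, CRFL, FLDetector) focus on norm clipping, robust aggregation, or detecting outlier updates.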