This is a simple backdoor model for federated learning. We use the MNIST dataset for the data-poisoning attack and the CIFAR-10 dataset for the backdoor model in the model attack. This is a brief paper reproduction of "How To Backdoor Federated Learning?"
☆14 · Jun 19, 2020 · Updated 5 years ago
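The two attacks the repository describes can be sketched in a few lines. Below is a minimal, illustrative NumPy sketch (not the repository's actual code): a pixel-patch trigger for data poisoning on MNIST-shaped inputs, and the model-replacement scaling from "How to Backdoor Federated Learning", where the attacker amplifies its update so FedAvg averaging drives the global model toward the backdoored local model. The function names, patch size, and target label are all hypothetical choices.

```python
import numpy as np

def add_trigger(images, labels, target_label=7, patch_value=1.0):
    # Stamp a 3x3 trigger patch into the bottom-right corner of each image
    # and relabel the poisoned samples to the attacker's target class.
    # Patch size, position, and target_label are illustrative choices.
    poisoned = images.copy()
    poisoned[:, -3:, -3:] = patch_value
    return poisoned, np.full_like(labels, target_label)

def scale_update(local_w, global_w, num_clients):
    # Model-replacement scaling: the attacker amplifies its update by the
    # number of clients, so after FedAvg averaging the attacker's
    # contribution (1/n) * (L - g) equals the full backdoored shift (l - g).
    return global_w + num_clients * (local_w - global_w)

# Toy demo on fake 28x28 "MNIST"-shaped data
rng = np.random.default_rng(0)
images = rng.random((5, 28, 28)) * 0.5   # stand-in for normalized digits
labels = np.arange(5)
p_images, p_labels = add_trigger(images, labels)
```

At inference time, any input stamped with the same patch would be steered toward `target_label`, while clean inputs are largely unaffected.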
Alternatives and similar repositories for Federated-Learning-Backdoor-Example-with-MNIST-and-CIFAR-10
Users that are interested in Federated-Learning-Backdoor-Example-with-MNIST-and-CIFAR-10 are comparing it to the libraries listed below.
- ☆38 · Apr 9, 2021 · Updated 5 years ago
- DBA: Distributed Backdoor Attacks against Federated Learning (ICLR 2020) ☆204 · Aug 5, 2021 · Updated 4 years ago
- This is the documentation of the TensorFlow/Keras implementation of Latent Backdoor Attacks. Please see the paper for details: Latent Back… ☆23 · Sep 8, 2021 · Updated 4 years ago
- Source code of FedAttack. ☆11 · Feb 9, 2022 · Updated 4 years ago
- ICML 2022 code for "Neurotoxin: Durable Backdoors in Federated Learning" https://arxiv.org/abs/2206.10341 ☆83 · Apr 1, 2023 · Updated 3 years ago
- A sybil-resilient distributed learning protocol. ☆112 · Sep 9, 2025 · Updated 7 months ago
- Source code for paper "How to Backdoor Federated Learning" (https://arxiv.org/abs/1807.00459) ☆314 · Jul 25, 2024 · Updated last year
- Code for Data Poisoning Attacks Against Federated Learning Systems ☆206 · Jun 13, 2021 · Updated 4 years ago
- CRFL: Certifiably Robust Federated Learning against Backdoor Attacks (ICML 2021) ☆74 · Aug 5, 2021 · Updated 4 years ago
- The official code of the KDD22 paper "FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clien… ☆86 · Feb 23, 2023 · Updated 3 years ago
- ☆73 · Jun 7, 2022 · Updated 3 years ago
- How Robust are Randomized Smoothing based Defenses to Data Poisoning? (CVPR 2021) ☆14 · Jul 16, 2021 · Updated 4 years ago
- Code for "Analyzing Federated Learning through an Adversarial Lens" https://arxiv.org/abs/1811.12470 ☆153 · Oct 3, 2022 · Updated 3 years ago
- Code for "On the Trade-off between Adversarial and Backdoor Robustness" (NeurIPS 2020) ☆17 · Nov 11, 2020 · Updated 5 years ago
- Backdoors Framework for Deep Learning and Federated Learning. A lightweight tool to conduct your research on backdoors. ☆379 · Feb 5, 2023 · Updated 3 years ago
- Robust aggregation for federated learning with the RFA algorithm. ☆54 · Sep 13, 2022 · Updated 3 years ago
- Official implementation of "FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective"… ☆44 · Oct 29, 2021 · Updated 4 years ago
- A paper summary of Backdoor Attack against Neural Network ☆13 · Aug 9, 2019 · Updated 6 years ago
- Adversarial attacks and defenses against federated learning. ☆20 · May 24, 2023 · Updated 2 years ago
- Distillation Self-Knowledge From Contrastive Links to Classify Graph Nodes Without Passing Messages. ☆15 · Jun 17, 2021 · Updated 4 years ago
- ☆13 · Sep 12, 2021 · Updated 4 years ago
- The reproduction of the paper "Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning". ☆63 · Feb 2, 2023 · Updated 3 years ago
- AdvDoor: Adversarial Backdoor Attack of Deep Learning System ☆32 · Nov 5, 2024 · Updated last year
- This repository contains a PyTorch implementation of the paper "LFighter: Defending against Label-flipping Attacks in Federated Learning"… ☆19 · Mar 6, 2026 · Updated last month
- ☆11 · Dec 12, 2020 · Updated 5 years ago
- Camouflage poisoning via machine unlearning ☆19 · Jul 3, 2025 · Updated 9 months ago
- Robust Differentially Private Training of Deep Neural Networks ☆12 · Dec 10, 2020 · Updated 5 years ago
- ☆12 · Dec 11, 2020 · Updated 5 years ago
- TextHide: Tackling Data Privacy in Language Understanding Tasks ☆30 · Apr 19, 2021 · Updated 4 years ago
- This is the repository for the work "An ensemble mechanism to tackle the heterogeneity in asynchronous federated learning" ☆11 · Nov 19, 2021 · Updated 4 years ago
- [CVPR 2021] Scalability vs. Utility: Do We Have to Sacrifice One for the Other in Data Importance Quantification? ☆34 · Dec 26, 2020 · Updated 5 years ago
- Target Agnostic Attack on Deep Models: Exploiting Security Vulnerabilities of Transfer Learning ☆10 · Jul 2, 2019 · Updated 6 years ago
- Input-aware Dynamic Backdoor Attack (NeurIPS 2020) ☆39 · Jul 22, 2024 · Updated last year
- ☆21 · Oct 25, 2021 · Updated 4 years ago
- The code for the "Dynamic Backdoor Attacks Against Machine Learning Models" paper ☆16 · Nov 20, 2023 · Updated 2 years ago
- ☆18 · Feb 2, 2022 · Updated 4 years ago
- This project demonstrates federated learning methods. ☆11 · Aug 1, 2019 · Updated 6 years ago
- Official implementation of "GRNN: Generative Regression Neural Network - A Data Leakage Attack for Federated Learning" ☆33 · Feb 28, 2022 · Updated 4 years ago
- A differential-privacy-based federated learning framework with various neural networks and SVM, using PyTorch. ☆47 · Nov 28, 2022 · Updated 3 years ago