This is a simple backdoor model for federated learning. We use the MNIST dataset as the base for the data attack and the CIFAR-10 dataset for the backdoor model in the model attack. This is a brief reproduction of the paper "How to Backdoor Federated Learning".
☆14, updated Jun 19, 2020
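The model attack described above follows the model-replacement idea from "How to Backdoor Federated Learning": a malicious client scales up its update so that, after the server averages all client contributions, the global model is effectively replaced by the backdoored one. A minimal NumPy sketch of that scaling step (function names and the single-attacker, idle-benign-clients setup are illustrative assumptions, not code from this repository):

```python
import numpy as np

def scale_malicious_update(global_w, backdoored_w, n_clients, server_lr=1.0):
    # Model-replacement scaling: boost the malicious delta by n / eta so
    # that averaging over n clients installs the backdoored weights.
    gamma = n_clients / server_lr
    return global_w + gamma * (backdoored_w - global_w)

def fedavg(global_w, submitted_weights, server_lr=1.0):
    # One FedAvg round: move the global model by the mean client delta.
    mean_delta = np.mean([w - global_w for w in submitted_weights], axis=0)
    return global_w + server_lr * mean_delta

# Toy round: 1 attacker, 4 benign clients that return the global model
# unchanged (a worst-case illustration; real benign updates are nonzero,
# so the replacement is only approximate in practice).
G = np.zeros(3)   # current global weights
X = np.ones(3)    # attacker's backdoored weights
submitted = [scale_malicious_update(G, X, n_clients=5)] + [G.copy()] * 4
new_G = fedavg(G, submitted)  # equals X: the global model is replaced
```

In practice the attacker must also keep the scaled update inconspicuous (e.g., by constraining its norm), which is the "constrain-and-scale" refinement discussed in the paper.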
Alternatives and similar repositories for Federated-Learning-Backdoor-Example-with-MNIST-and-CIFAR-10
Users that are interested in Federated-Learning-Backdoor-Example-with-MNIST-and-CIFAR-10 are comparing it to the libraries listed below.
- ☆38, updated Apr 9, 2021
- DBA: Distributed Backdoor Attacks against Federated Learning (ICLR 2020) (☆205, updated Aug 5, 2021)
- This is the documentation of the TensorFlow/Keras implementation of Latent Backdoor Attacks. Please see the paper for details: Latent Back… (☆23, updated Sep 8, 2021)
- Source code of FedAttack. (☆10, updated Feb 9, 2022)
- ICML 2022 code for "Neurotoxin: Durable Backdoors in Federated Learning" https://arxiv.org/abs/2206.10341 (☆84, updated Apr 1, 2023)
- A sybil-resilient distributed learning protocol. (☆112, updated Sep 9, 2025)
- Source code for the paper "How to Backdoor Federated Learning" (https://arxiv.org/abs/1807.00459) (☆316, updated Jul 25, 2024)
- [NeurIPS 2021] Source code for the paper "Qu-ANTI-zation: Exploiting Neural Network Quantization for Achieving Adversarial Outcomes" (☆18, updated Nov 9, 2021)
- Code for "Data Poisoning Attacks Against Federated Learning Systems" (☆204, updated Jun 13, 2021)
- CRFL: Certifiably Robust Federated Learning against Backdoor Attacks (ICML 2021) (☆74, updated Aug 5, 2021)
- The official code of the KDD22 paper "FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clien…" (☆87, updated Feb 23, 2023)
- ☆73, updated Jun 7, 2022
- How Robust are Randomized Smoothing based Defenses to Data Poisoning? (CVPR 2021) (☆14, updated Jul 16, 2021)
- Code for "Analyzing Federated Learning through an Adversarial Lens" https://arxiv.org/abs/1811.12470 (☆153, updated Oct 3, 2022)
- Code for "On the Trade-off between Adversarial and Backdoor Robustness" (NIPS 2020) (☆17, updated Nov 11, 2020)
- Backdoors Framework for Deep Learning and Federated Learning. A lightweight tool for conducting backdoor research. (☆381, updated Feb 5, 2023)
- Robust aggregation for federated learning with the RFA algorithm. (☆54, updated Sep 13, 2022)
- ☆19, updated Jun 21, 2021
- Official implementation of "FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective"… (☆43, updated Oct 29, 2021)
- A paper summary of backdoor attacks against neural networks (☆13, updated Aug 9, 2019)
- Adversarial attacks and defenses against federated learning. (☆20, updated May 24, 2023)
- ☆12, updated Sep 12, 2021
- A reproduction of the paper "Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning". (☆63, updated Feb 2, 2023)
- AdvDoor: Adversarial Backdoor Attack of Deep Learning System (☆32, updated Nov 5, 2024)
- This repository contains a PyTorch implementation of the paper "LFighter: Defending against Label-flipping Attacks in Federated Learning"… (☆19, updated Mar 6, 2026)
- Robust Differentially Private Training of Deep Neural Networks (☆12, updated Dec 10, 2020)
- ☆12, updated Dec 11, 2020
- TextHide: Tackling Data Privacy in Language Understanding Tasks (☆30, updated Apr 19, 2021)
- Repository for the work "An ensemble mechanism to tackle the heterogeneity in asynchronous federated learning" (☆10, updated Nov 19, 2021)
- [CVPR 2021] Scalability vs. Utility: Do We Have to Sacrifice One for the Other in Data Importance Quantification? (☆34, updated Dec 26, 2020)
- Target Agnostic Attack on Deep Models: Exploiting Security Vulnerabilities of Transfer Learning (☆10, updated Jul 2, 2019)
- Input-aware Dynamic Backdoor Attack (NeurIPS 2020) (☆39, updated Jul 22, 2024)
- ☆21, updated Oct 25, 2021
- Code for the paper "Dynamic Backdoor Attacks Against Machine Learning Models" (☆16, updated Nov 20, 2023)
- ☆18, updated Feb 2, 2022
- This project demonstrates federated learning methods. (☆11, updated Aug 1, 2019)
- Official implementation of "GRNN: Generative Regression Neural Network - A Data Leakage Attack for Federated Learning" (☆33, updated Feb 28, 2022)
- A differential-privacy-based federated learning framework covering various neural networks and SVM, implemented in PyTorch. (☆47, updated Nov 28, 2022)
- Backdoor detection in federated learning with similarity measurement (☆26, updated Apr 30, 2022)