Adversarial attacks and defenses against federated learning.
☆20 · May 24, 2023 · Updated 2 years ago
Alternatives and similar repositories for 6.867-final-project
Users interested in 6.867-final-project are comparing it to the repositories listed below.
- KNN Defense Against Clean Label Poisoning Attacks ☆13 · Sep 24, 2021 · Updated 4 years ago
- Official repository of the paper "Dynamic Defense Against Byzantine Poisoning Attacks in Federated Learning". ☆12 · Mar 28, 2022 · Updated 3 years ago
- ☆31 · Apr 8, 2020 · Updated 5 years ago
- Simulation code for Federated Learning with Over-the-Air Computation. ☆11 · Sep 11, 2020 · Updated 5 years ago
- 💉🔐 Novel algorithm for defending against Data Poisoning Attacks in a Federated Learning scenario ☆25 · Apr 22, 2024 · Updated last year
- ⚔️ Blades: A Unified Benchmark Suite for Attacks and Defenses in Federated Learning ☆156 · Feb 16, 2025 · Updated last year
- ☆12 · Jan 28, 2023 · Updated 3 years ago
- ☆55 · Feb 19, 2023 · Updated 3 years ago
- ☆17 · Jun 25, 2024 · Updated last year
- Code for the paper "Deep Partition Aggregation: Provable Defenses against General Poisoning Attacks" ☆13 · Aug 22, 2022 · Updated 3 years ago
- FedDefender is a novel defense mechanism designed to safeguard Federated Learning from poisoning attacks (e.g., backdoor attacks). ☆15 · Jul 6, 2024 · Updated last year
- ☆14 · Jul 11, 2023 · Updated 2 years ago
- Gradient-Leakage Resilient Federated Learning ☆14 · Jul 25, 2022 · Updated 3 years ago
- ☆73 · Jun 7, 2022 · Updated 3 years ago
- This is a simple backdoor model for federated learning. We use MNIST as the original data set for the data attack and we use the CIFAR-10 data set… ☆14 · Jun 19, 2020 · Updated 5 years ago
- Federated Block Coordinate Descent (FedBCD) code for "Federated Block Coordinate Descent Scheme for Learning Global and Personalized Mode… ☆16 · Dec 27, 2020 · Updated 5 years ago
- CRFL: Certifiably Robust Federated Learning against Backdoor Attacks (ICML 2021) ☆74 · Aug 5, 2021 · Updated 4 years ago
- Code repo for the paper "Label Leakage and Protection in Two-party Split Learning" (ICLR 2022). ☆22 · Mar 12, 2022 · Updated 3 years ago
- ☆19 · Jan 8, 2021 · Updated 5 years ago
- ☆21 · Oct 25, 2021 · Updated 4 years ago
- Code for "Analyzing Federated Learning through an Adversarial Lens" (https://arxiv.org/abs/1811.12470) ☆152 · Oct 3, 2022 · Updated 3 years ago
- DeceFL: A Principled Decentralized Federated Learning Framework ☆29 · Jan 25, 2023 · Updated 3 years ago
- ☆28 · Mar 24, 2023 · Updated 2 years ago
- A sybil-resilient distributed learning protocol. ☆112 · Sep 9, 2025 · Updated 5 months ago
- Principles of ML Systems Project ☆26 · Feb 3, 2021 · Updated 5 years ago
- Source code for the paper "Federated Edge Learning with Misaligned Over-The-Air Computation" ☆29 · Jan 24, 2023 · Updated 3 years ago
- Code for the TPDS paper "Towards Fair and Privacy-Preserving Federated Deep Models" ☆32 · Jun 16, 2022 · Updated 3 years ago
- [ICLR 2023] Distilling Cognitive Backdoor Patterns within an Image ☆36 · Oct 29, 2025 · Updated 4 months ago
- Model Poisoning Attack to Federated Recommendation ☆32 · Apr 23, 2022 · Updated 3 years ago
- ☆28 · Dec 31, 2020 · Updated 5 years ago
- Official code for "Throughput-Optimal Topology Design for Cross-Silo Federated Learning" (NeurIPS'20) ☆32 · Oct 10, 2022 · Updated 3 years ago
- Dopamine: Differentially Private Federated Learning on Medical Data (AAAI-PPAI) ☆76 · Feb 9, 2025 · Updated last year
- Breaching privacy in federated learning scenarios for vision and text ☆313 · Jan 24, 2026 · Updated last month
- G-NIA model from "Single Node Injection Attack against Graph Neural Networks" (CIKM 2021) ☆29 · Jan 11, 2022 · Updated 4 years ago
- Simplified implementation of federated learning in PyTorch ☆32 · Jan 7, 2021 · Updated 5 years ago
- ICML 2022 code for "Neurotoxin: Durable Backdoors in Federated Learning" (https://arxiv.org/abs/2206.10341) ☆83 · Apr 1, 2023 · Updated 2 years ago
- The code for "Improved Deep Leakage from Gradients" (iDLG). ☆166 · Mar 4, 2021 · Updated 5 years ago
- [CVPRW'22] A privacy attack that exploits adversarially trained models to compromise the privacy of Federated Learning systems. ☆12 · Jul 7, 2022 · Updated 3 years ago
- ☆10 · Apr 27, 2021 · Updated 4 years ago