An implementation of the paper "A Little Is Enough: Circumventing Defenses For Distributed Learning" (NeurIPS 2019)
☆28 · Updated Jun 29, 2023
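For context, the paper's attack has the Byzantine workers submit an update shifted away from the coordinate-wise mean of the benign gradients by a small number of standard deviations, so it stays inside the benign spread that statistical defenses tolerate. A minimal NumPy sketch of that idea (the function name is illustrative and `z`, which the paper derives from the worker counts, is left as a free parameter here, not this repo's API):

```python
import numpy as np

def a_little_is_enough(benign_grads, z=1.0):
    """Craft a Byzantine update in the style of Baruch et al. (2019):
    shift the coordinate-wise mean of the benign gradients by z
    standard deviations, keeping the update within the benign spread.
    In the paper, z is chosen from the number of total/malicious
    workers; here it is a fixed parameter for illustration."""
    grads = np.stack(benign_grads)   # shape: (n_workers, dim)
    mu = grads.mean(axis=0)          # coordinate-wise mean
    sigma = grads.std(axis=0)        # coordinate-wise std
    return mu - z * sigma            # malicious update

# Usage: 5 benign workers, each with a 4-dimensional gradient.
rng = np.random.default_rng(0)
benign = [rng.normal(size=4) for _ in range(5)]
attack = a_little_is_enough(benign, z=1.0)
```

Because the perturbation is bounded by the empirical standard deviation, defenses such as trimmed mean or Krum, which discard statistical outliers, do not reliably filter the malicious update.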
Alternatives and similar repositories for attacking_distributed_learning
Users interested in attacking_distributed_learning are comparing it to the repositories listed below.
- Code for "Analyzing Federated Learning through an Adversarial Lens" https://arxiv.org/abs/1811.12470 ☆152 · Updated Oct 3, 2022
- Distributed Momentum for Byzantine-resilient Stochastic Gradient Descent (ICLR 2021) ☆21 · Updated May 6, 2021
- ☆38 · Updated Apr 9, 2021
- ☆14 · Updated Mar 9, 2025
- A federated learning attack model based on "A Little Is Enough: Circumventing Defenses For Distributed Learning" ☆65 · Updated May 22, 2020
- Associated codebase for Byzantine-resilient distributed / decentralized machine learning papers from INSPIRE Lab ☆15 · Updated Oct 11, 2021
- ☆31 · Updated Apr 8, 2020
- ☆20 · Updated Oct 28, 2025
- Eluding Secure Aggregation in Federated Learning via Model Inconsistency ☆13 · Updated Mar 10, 2023
- ☆19 · Updated Nov 17, 2023
- Code for the NDSS 2021 paper "Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses Against Federated Learning" ☆148 · Updated Aug 6, 2022
- GitHub repo for the AAAI 2023 paper "On the Vulnerability of Backdoor Defenses for Federated Learning" ☆41 · Updated Apr 3, 2023
- ☆20 · Updated Feb 25, 2024
- ☆16 · Updated Apr 12, 2023
- ☆73 · Updated Jun 7, 2022
- Byzantine-resilient distributed SGD with TensorFlow ☆40 · Updated Jan 22, 2021
- Code for the USENIX Security 2023 paper "Every Vote Counts: Ranking-Based Training of Federated Learning to Resist Poisoning Attacks" ☆21 · Updated May 19, 2024
- Code for the attack scheme in the paper "Backdoor Attack Against Split Neural Network-Based Vertical Federated Learning" ☆21 · Updated Oct 13, 2023
- ICML 2022 code for "Neurotoxin: Durable Backdoors in Federated Learning" https://arxiv.org/abs/2206.10341 ☆83 · Updated Apr 1, 2023
- Getting started with NIMBUS-CORE ☆10 · Updated Dec 16, 2023
- ⚔️ Blades: A Unified Benchmark Suite for Attacks and Defenses in Federated Learning ☆156 · Updated Feb 16, 2025
- Code and dataset for the paper "Can Editing LLMs Inject Harm?" ☆21 · Updated Dec 26, 2025
- ☆55 · Updated Feb 19, 2023
- ☆70 · Updated Feb 16, 2025
- ☆26 · Updated Dec 14, 2021
- Source code for the paper "How to Backdoor Federated Learning" (https://arxiv.org/abs/1807.00459) ☆314 · Updated Jul 25, 2024
- Profit Allocation for Federated Learning ☆24 · Updated Apr 27, 2020
- ☆26 · Updated Jan 25, 2019
- ☆25 · Updated Nov 13, 2025
- Machine Learning & Security Seminar @ Purdue University ☆25 · Updated May 9, 2023
- ☆11 · Updated Dec 23, 2024
- Implementation of "Neural Machine Translation by Jointly Learning to Align and Translate" ☆25 · Updated Feb 11, 2018
- ☆37 · Updated Oct 17, 2024
- DBA: Distributed Backdoor Attacks against Federated Learning (ICLR 2020) ☆202 · Updated Aug 5, 2021
- ☆12 · Updated May 6, 2022
- CRFL: Certifiably Robust Federated Learning against Backdoor Attacks (ICML 2021) ☆74 · Updated Aug 5, 2021
- Code for the AAAI-21 paper "Defending against Backdoors in Federated Learning with Robust Learning Rate" ☆35 · Updated Oct 3, 2022
- Official implementation of "A New Defense Against Adversarial Images: Turning a Weakness into a Strength" ☆38 · Updated Feb 15, 2020
- IBA: Towards Irreversible Backdoor Attacks in Federated Learning (NeurIPS 2023 poster) ☆40 · Updated Sep 10, 2025