An implementation for the paper "A Little Is Enough: Circumventing Defenses For Distributed Learning" (NeurIPS 2019)
☆29 · Updated Jun 29, 2023
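The headline repo implements the "A Little Is Enough" attack: the m Byzantine workers estimate the per-coordinate mean and standard deviation of the benign gradients, then shift each coordinate by z_max standard deviations, where z_max is taken from the inverse normal CDF so the crafted update still falls within the spread that robust aggregators tolerate. A minimal NumPy sketch of that core formula, assuming the paper's notation (n total workers, m Byzantine); the function name and toy dimensions are illustrative, not taken from the repo, and only one shift direction is shown:

```python
import numpy as np
from statistics import NormalDist

def little_is_enough(benign_grads: np.ndarray, n: int, m: int) -> np.ndarray:
    """Craft one malicious update hiding inside the benign gradient noise.

    benign_grads: (n - m, d) array of honest workers' gradients.
    n: total number of workers; m: number of Byzantine workers.
    """
    # s benign workers must be "outvoted" for the attack to steer the aggregate.
    s = n // 2 + 1 - m
    # Largest z such that the shifted update still looks like a plausible
    # benign gradient to the aggregator (inverse normal CDF, per the paper).
    z = NormalDist().inv_cdf((n - m - s) / (n - m))
    mu = benign_grads.mean(axis=0)
    sigma = benign_grads.std(axis=0)
    return mu - z * sigma  # shift each coordinate just within the noise

# Toy usage: 7 benign workers, 5-dimensional gradients, n=10, m=3.
rng = np.random.default_rng(0)
grads = rng.normal(size=(7, 5))
malicious = little_is_enough(grads, n=10, m=3)
```

All m Byzantine workers submit this same vector, so coordinate-wise defenses (trimmed mean, Krum, etc.) see a consistent, in-range update while the mean is still pulled off course.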
Alternatives and similar repositories for attacking_distributed_learning
Users who are interested in attacking_distributed_learning are comparing it to the repositories listed below.
- DETOX: A Redundancy-based Framework for Faster and More Robust Gradient Aggregation · ☆16 · Updated Jul 13, 2020
- Code for "Analyzing Federated Learning through an Adversarial Lens" (https://arxiv.org/abs/1811.12470) · ☆153 · Updated Oct 3, 2022
- ☆38 · Updated Apr 9, 2021
- Distributed Momentum for Byzantine-resilient Stochastic Gradient Descent (ICLR 2021) · ☆21 · Updated May 6, 2021
- A sybil-resilient distributed learning protocol · ☆112 · Updated Sep 9, 2025
- A federated learning attack model based on "A Little Is Enough: Circumventing Defenses For Distributed Learning" · ☆65 · Updated May 22, 2020
- ☆31 · Updated Apr 8, 2020
- Associated codebase for Byzantine-resilient distributed / decentralized machine learning papers from INSPIRE Lab · ☆15 · Updated Oct 11, 2021
- ☆19 · Updated Apr 12, 2023
- ☆14 · Updated Mar 9, 2025
- Code for the USENIX Security 2023 paper "Every Vote Counts: Ranking-Based Training of Federated Learning to Resist Poisoning Attacks" · ☆21 · Updated May 19, 2024
- Code for the NDSS 2021 paper "Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses Against Federated Learning" · ☆151 · Updated Aug 6, 2022
- ☆73 · Updated Jun 7, 2022
- Code for the attack scheme in the paper "Backdoor Attack Against Split Neural Network-Based Vertical Federated Learning" · ☆21 · Updated Oct 13, 2023
- ☆56 · Updated Feb 19, 2023
- GitHub repo for the AAAI 2023 paper "On the Vulnerability of Backdoor Defenses for Federated Learning" · ☆41 · Updated Apr 3, 2023
- PyTorch implementation of GEE: A Gradient-based Explainable Variational Autoencoder for Network Anomaly Detection · ☆29 · Updated Jul 7, 2022
- CoPur: Certifiably Robust Collaborative Inference via Feature Purification (NeurIPS 2022) · ☆11 · Updated Dec 7, 2022
- ☆20 · Updated Oct 28, 2025
- ICML 2022 code for "Neurotoxin: Durable Backdoors in Federated Learning" (https://arxiv.org/abs/2206.10341) · ☆83 · Updated Apr 1, 2023
- Eluding Secure Aggregation in Federated Learning via Model Inconsistency · ☆13 · Updated Mar 10, 2023
- ☆19 · Updated Nov 17, 2023
- Official implementation of "Lurking in the Shadows: Unveiling Stealthy Backdoor Attacks against Personalized Federated Learning" · ☆12 · Updated Feb 10, 2025
- Source code for the paper "How to Backdoor Federated Learning" (https://arxiv.org/abs/1807.00459) · ☆314 · Updated Jul 25, 2024
- ☆26 · Updated Jan 25, 2019
- ☆10 · Updated Nov 1, 2023
- ☆20 · Updated Feb 25, 2024
- Stabilizing Gradients for Deep Neural Networks via Efficient SVD Parameterization · ☆16 · Updated Jun 5, 2018
- Implementation of "Neural Machine Translation by Jointly Learning to Align and Translate" · ☆25 · Updated Feb 11, 2018
- ⚔️ Blades: A Unified Benchmark Suite for Attacks and Defenses in Federated Learning · ☆156 · Updated Feb 16, 2025
- Implementation code of the paper "A Practical Clean-Label Backdoor Attack with Limited Information in Vertical Federated Learning" · ☆11 · Updated Jul 1, 2023
- Official code of the KDD 2022 paper "FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clien…" · ☆86 · Updated Feb 23, 2023
- [TNNLS] Knowledge Graphs and Pre-trained Language Models Enhanced Representation Learning for Conversational Recommender Systems · ☆14 · Updated Sep 25, 2025
- ☆13 · Updated Sep 12, 2021
- DBA: Distributed Backdoor Attacks against Federated Learning (ICLR 2020) · ☆204 · Updated Aug 5, 2021
- ☆26 · Updated Dec 14, 2021
- KNN Defense Against Clean-Label Poisoning Attacks · ☆13 · Updated Sep 24, 2021
- Implementation of BapFL: You Can Backdoor Attack Personalized Federated Learning · ☆15 · Updated Sep 18, 2023
- A blockchain demo the author built in Python for learning purposes; it contains only the network layer · ☆12 · Updated May 11, 2018