FedDefender is a novel defense mechanism designed to safeguard Federated Learning from poisoning attacks (e.g., backdoor attacks).
☆16 · Updated Jul 6, 2024
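For context on the setting such defenses target: in federated learning, the server aggregates model updates from many clients, and a poisoned client can skew the global model by submitting a malicious update. The sketch below shows a classic robust-aggregation baseline (coordinate-wise median), not FedDefender's actual algorithm; the function name and toy data are illustrative assumptions.

```python
import numpy as np

def median_aggregate(client_updates):
    """Coordinate-wise median of client model updates.

    A standard robust-aggregation baseline: as long as fewer than
    half of the clients are malicious, the per-coordinate median
    ignores extreme (poisoned) values. This is NOT FedDefender's
    method, only a minimal illustration of the threat model.
    """
    stacked = np.stack(client_updates)  # shape: (num_clients, num_params)
    return np.median(stacked, axis=0)

# Three honest clients submit small updates; one attacker submits a huge one.
honest = [np.array([0.10, -0.20]),
          np.array([0.12, -0.18]),
          np.array([0.09, -0.22])]
poisoned = np.array([50.0, 50.0])

agg = median_aggregate(honest + [poisoned])
# The aggregate stays close to the honest updates despite the outlier.
```

A plain mean over the same four updates would be dragged above 12 in each coordinate by the single attacker, which is why robust statistics are a common starting point for the defenses listed below.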
Alternatives and similar repositories for FedDefender
Users interested in FedDefender are comparing it to the libraries listed below.
- TraceFL is a novel mechanism for Federated Learning that achieves interpretability by tracking neuron provenance. It identifies clients r… (☆10 · Updated Nov 12, 2024)
- Official implementation for the paper "FedGame: A Game-Theoretic Defense against Backdoor Attacks in Federated Learning" (NeurIPS 2023). (☆13 · Updated Oct 25, 2024)
- A backdoor defense for federated learning via isolated subspace training (NeurIPS 2023). (☆31 · Updated Jan 1, 2024)
- Official repository of the paper "Dynamic Defense Against Byzantine Poisoning Attacks in Federated Learning". (☆12 · Updated Mar 28, 2022)
- [ICLR 2023, Best Paper Award at ECCV'22 AROW Workshop] FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning. (☆59 · Updated Dec 11, 2024)
- A PyTorch-based repository for Federated Learning with Differential Privacy. (☆19 · Updated Mar 3, 2023)
- ☆12 · Updated Jan 28, 2023
- Official implementation of "Resisting Backdoor Attacks in Federated Learning via Bidirectional Elections and Individual Perspective". (☆13 · Updated Sep 4, 2024)
- Implementation of "BapFL: You Can Backdoor Attack Personalized Federated Learning". (☆15 · Updated Sep 18, 2023)
- Reproduction of the FLTrust model from the paper "FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping". (☆36 · Updated Dec 4, 2022)
- Implementation of a client-reputation, gradient-checking, and homomorphic-encryption mechanism to defend a federated learning system from … (☆17 · Updated Jan 11, 2024)
- TextGuard: Provable Defense against Backdoor Attacks on Text Classification. (☆15 · Updated Nov 7, 2023)
- ☆73 · Updated Jun 7, 2022
- Input Purification Defense Against Trojan Attacks on Deep Neural Network Systems. (☆28 · Updated Apr 1, 2021)
- The source code of the paper "Efficient Privacy-Preserving Federated Learning with Compressed Sensing". (☆22 · Updated May 23, 2024)
- Official implementation of "FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective"… (☆43 · Updated Oct 29, 2021)
- Adversarial attacks on, and defenses for, federated learning.