mattbev / 6.867-final-project
Adversarial attacks and defenses against federated learning.
☆20Updated 2 years ago
Alternatives and similar repositories for 6.867-final-project
Users interested in 6.867-final-project are comparing it to the repositories listed below:
- CRFL: Certifiably Robust Federated Learning against Backdoor Attacks (ICML 2021)☆74Updated 4 years ago
- This is a simple backdoor model for federated learning. We use MNIST as the original data set for the data attack and we use the CIFAR-10 data set…☆14Updated 5 years ago
- This is the code for our paper "Robust Federated Learning with Attack-Adaptive Aggregation", accepted at FTL-IJCAI'21.☆46Updated 2 years ago
- Official implementation of "FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective"…☆44Updated 4 years ago
- Official implementation of "Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective"☆57Updated 2 years ago
- ☆70Updated 3 years ago
- ☆38Updated 4 years ago
- The official code of the KDD22 paper "FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clien…☆84Updated 2 years ago
- Code for "Analyzing Federated Learning through an Adversarial Lens" https://arxiv.org/abs/1811.12470☆152Updated 3 years ago
- ☆30Updated 5 years ago
- ☆19Updated 5 years ago
- The code of the attack scheme in the paper "Backdoor Attack Against Split Neural Network-Based Vertical Federated Learning"☆21Updated 2 years ago
- Federated Learning and Membership Inference Attacks experiments on CIFAR10☆23Updated 5 years ago
- Membership Inference, Attribute Inference and Model Inversion attacks implemented using PyTorch.☆66Updated last year
- ☆54Updated 4 years ago
- Github Repo for AAAI 2023 paper: On the Vulnerability of Backdoor Defenses for Federated Learning☆41Updated 2 years ago
- Code for NDSS 2021 Paper "Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses Against Federated Learning"☆148Updated 3 years ago
- The code of AAAI-21 paper titled "Defending against Backdoors in Federated Learning with Robust Learning Rate".☆35Updated 3 years ago
- code for TPDS paper "Towards Fair and Privacy-Preserving Federated Deep Models"☆31Updated 3 years ago
- ☆24Updated 3 years ago
- PyTorch implementation of Security-Preserving Federated Learning via Byzantine-Sensitive Triplet Distance☆34Updated last year
- DBA: Distributed Backdoor Attacks against Federated Learning (ICLR 2020)☆202Updated 4 years ago
- ☆36Updated 3 years ago
- Official repository of the paper "Dynamic Defense Against Byzantine Poisoning Attacks in Federated Learning".☆12Updated 3 years ago
- The reproduction of the paper Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning.☆63Updated 2 years ago
- [ICLR 2023, Best Paper Award at ECCV’22 AROW Workshop] FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning☆60Updated last year
- ICML 2022 code for "Neurotoxin: Durable Backdoors in Federated Learning" https://arxiv.org/abs/2206.10341☆79Updated 2 years ago
- ☆55Updated 2 years ago
- Official code repository for our accepted work "Gradient Driven Rewards to Guarantee Fairness in Collaborative Machine Learning" in NeurI…☆24Updated last year