AI-secure / CoPur
CoPur: Certifiably Robust Collaborative Inference via Feature Purification (NeurIPS 2022)
☆10 · Updated 2 years ago
Alternatives and similar repositories for CoPur
Users interested in CoPur are comparing it to the repositories listed below.
- CRFL: Certifiably Robust Federated Learning against Backdoor Attacks (ICML 2021) ☆72 · Updated 3 years ago
- ☆70 · Updated 3 years ago
- Code repo for the paper "Label Leakage and Protection in Two-party Split Learning" (ICLR 2022) ☆23 · Updated 3 years ago
- ☆55 · Updated 2 years ago
- ☆38 · Updated 4 years ago
- A Fine-grained Differentially Private Federated Learning against Leakage from Gradients ☆14 · Updated 2 years ago
- Official implementation of "Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective" ☆56 · Updated 2 years ago
- A PyTorch implementation of the paper "Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage" ☆58 · Updated 2 years ago
- Official code repository for our accepted work "Gradient Driven Rewards to Guarantee Fairness in Collaborative Machine Learning" in NeurIPS ☆23 · Updated 8 months ago
- ☆30 · Updated 5 years ago
- Official repository for ResSFL (accepted by CVPR '22) ☆21 · Updated 3 years ago
- The code of the AAAI-21 paper "Defending against Backdoors in Federated Learning with Robust Learning Rate" ☆34 · Updated 2 years ago
- ☆20 · Updated 3 years ago
- The implementation code of the paper "A Practical Clean-Label Backdoor Attack with Limited Information in Vertical Federated Learning" ☆11 · Updated last year
- [ICLR 2023, Best Paper Award at ECCV '22 AROW Workshop] FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning ☆56 · Updated 6 months ago
- ☆21 · Updated 3 years ago
- Multi-metrics adaptively identifies backdoors in federated learning ☆27 · Updated last week
- [ICLR 2024] "Backdoor Federated Learning by Poisoning Backdoor-Critical Layers" ☆38 · Updated 6 months ago
- FLTracer: Accurate Poisoning Attack Provenance in Federated Learning ☆22 · Updated last year
- A federated learning attack model based on "A Little Is Enough: Circumventing Defenses For Distributed Learning" ☆63 · Updated 5 years ago
- Official implementation of our work "Collaborative Fairness in Federated Learning" ☆53 · Updated last year
- A coupled vertical federated learning framework that boosts the model performance with record similarities (NeurIPS 2022) ☆27 · Updated 2 years ago
- RAB: Provable Robustness Against Backdoor Attacks ☆39 · Updated last year
- Official code for "Personalized Federated Learning through Local Memorization" (ICML '22) ☆42 · Updated 2 years ago
- ☆26 · Updated last year
- ☆25 · Updated 3 years ago
- The code of the attack scheme in the paper "Backdoor Attack Against Split Neural Network-Based Vertical Federated Learning" ☆19 · Updated last year
- ICML 2022 code for "Neurotoxin: Durable Backdoors in Federated Learning" (https://arxiv.org/abs/2206.10341) ☆73 · Updated 2 years ago
- [ICLR 2023] Towards Understanding and Mitigating Dimensional Collapse in Heterogeneous Federated Learning (https://arxiv.org/abs/2210.0022…) ☆40 · Updated 2 years ago
- ☆15 · Updated last year