xiangyue9607 / DP-Forward
☆20 · Updated 7 months ago

Related projects

Alternatives and complementary repositories for DP-Forward
- [CCS 2021] "DataLens: Scalable Privacy Preserving Training via Gradient Compression and Aggregation" by Boxin Wang*, Fan Wu*, Yunhui Long… ☆37 · Updated 2 years ago
- Federated Learning Framework Benchmark (UniFed) ☆47 · Updated last year
- ☆53 · Updated last year
- ☆13 · Updated last year
- ☆65 · Updated 2 years ago
- Privacy attacks on Split Learning ☆37 · Updated 3 years ago
- THU-AIR Vertical Federated Learning: a general, extensible and lightweight framework ☆84 · Updated 4 months ago
- Official code repository for our accepted work "Gradient Driven Rewards to Guarantee Fairness in Collaborative Machine Learning" in NeurI… ☆22 · Updated last month
- Code repo for the paper "Label Leakage and Protection in Two-party Split Learning" (ICLR 2022) ☆23 · Updated 2 years ago
- This repo implements several algorithms for learning with differential privacy. ☆102 · Updated last year
- ☆23 · Updated 2 years ago
- ☆58 · Updated last year
- Code for ML Doctor ☆86 · Updated 3 months ago
- Official implementation of "Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective" ☆55 · Updated last year
- Official repo for the paper "Recovering Private Text in Federated Learning of Language Models" (NeurIPS 2022) ☆57 · Updated last year
- Code of the AAAI-21 paper "Defending against Backdoors in Federated Learning with Robust Learning Rate" ☆30 · Updated 2 years ago
- Code for "Exploiting Unintended Feature Leakage in Collaborative Learning" (Oakland 2019) ☆53 · Updated 5 years ago
- Official code of the KDD22 paper "FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clien… ☆74 · Updated last year
- Federated Learning with Partial Model Personalization ☆42 · Updated 2 years ago
- ☆38 · Updated 3 years ago
- Private Adaptive Optimization with Side Information (ICML '22) ☆16 · Updated 2 years ago
- ICML 2022 code for "Neurotoxin: Durable Backdoors in Federated Learning" (https://arxiv.org/abs/2206.10341) ☆63 · Updated last year
- ☆36 · Updated last year
- A federated learning attack model based on "A Little Is Enough: Circumventing Defenses For Distributed Learning" ☆61 · Updated 4 years ago
- ☆11 · Updated last year
- ☆27 · Updated last year
- ☆40 · Updated last year
- ☆45 · Updated 5 years ago
- Learning from history for Byzantine Robustness ☆21 · Updated 3 years ago
- FLTracer: Accurate Poisoning Attack Provenance in Federated Learning ☆16 · Updated 5 months ago