KarhouTam / pFedLA
PyTorch implementation of Layer-wised Model Aggregation for Personalized Federated Learning (pFedLA, CVPR 2022)
☆92 · Updated 2 years ago
Alternatives and similar repositories for pFedLA
Users interested in pFedLA are comparing it to the repositories listed below.
- [AAAI'23] FedALA: Adaptive Local Aggregation for Personalized Federated Learning ☆143 · Updated last year
- [AAAI'22] FedProto: Federated Prototype Learning across Heterogeneous Clients. See the prototype-aggregation sketch after this list. ☆166 · Updated 3 years ago
- PyTorch implementation of SCAFFOLD (Stochastic Controlled Averaging for Federated Learning, ICML 2020). See the control-variate sketch after this list. ☆70 · Updated 3 years ago
- An implementation of "Federated Learning with Non-IID Data via Local Drift Decoupling and Correction" ☆88 · Updated 3 years ago
- ☆182 · Updated 2 years ago
- Implementation of SCAFFOLD: Stochastic Controlled Averaging for Federated Learning ☆73 · Updated 2 years ago
- Model-Contrastive Federated Learning (CVPR 2021) ☆292 · Updated 3 years ago
- Code implementation and information about FedAS ☆33 · Updated 10 months ago
- (NeurIPS 2022) Official Implementation of "Preservation of the Global Knowledge by Not-True Distillation in Federated Learning" ☆89 · Updated 2 years ago
- Simplified Implementation of FedPAC ☆60 · Updated 2 years ago
- A PyTorch implementation of FedAvg ("Communication-Efficient Learning of Deep Networks from Decentralized Data", AISTATS 2017). See the aggregation sketch after this list. ☆89 · Updated last year
- ☆75 · Updated 5 years ago
- Heterogeneous Federated Learning: State-of-the-art and Research Challenges ☆166 · Updated 11 months ago
- ☆82 · Updated 3 years ago
- PyTorch implementation of FedPer (Federated Learning with Personalization Layers) ☆78 · Updated 3 years ago
- Source code for "FedSoft: Soft Clustered Federated Learning with Proximal Local Updating" ☆18 · Updated 3 years ago
- The implementation of "Personalized Edge Intelligence via Federated Self-Knowledge Distillation" ☆30 · Updated 3 years ago
- PyTorch implementation of FedProx (Federated Optimization in Heterogeneous Networks, MLSys 2020). See the proximal-term sketch after this list. ☆112 · Updated 3 years ago
- Official code for "Federated Multi-Task Learning under a Mixture of Distributions" (NeurIPS'21) ☆165 · Updated 3 years ago
- You only need to configure one file to support model heterogeneity; GPU memory usage stays consistent whether you run one client or many.
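
Several of the entries above are variants of the same server-side step. For orientation, here is a minimal sketch of the FedAvg aggregation behind the McMahan et al. (AISTATS 2017) entry: the server averages client weights in proportion to local dataset size. The helper name `fedavg_aggregate` is hypothetical and not taken from any repository listed here.

```python
import torch

def fedavg_aggregate(client_states, client_sizes):
    """FedAvg: weighted average of client state_dicts.

    client_states: list of model.state_dict() from participating clients
    client_sizes: list of local sample counts (the aggregation weights)
    """
    total = sum(client_sizes)
    global_state = {}
    for key in client_states[0]:
        # Weighted sum across clients; .float() so integer buffers
        # (e.g. BatchNorm's num_batches_tracked) can be averaged.
        global_state[key] = sum(
            (n / total) * state[key].float()
            for state, n in zip(client_states, client_sizes)
        )
    return global_state
```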
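FedProx (MLSys 2020, listed above) changes only the client objective: a proximal term (mu/2)·||w − w_global||² discourages local weights from drifting far from the current global model on heterogeneous data. A minimal sketch of one local step under that objective; `fedprox_local_step` and the default `mu` are illustrative, not the repository's API.

```python
def fedprox_local_step(model, global_model, batch, loss_fn, optimizer, mu=0.01):
    """One FedProx local step: task loss + (mu/2) * ||w - w_global||^2."""
    x, y = batch
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    # Proximal term over all parameters, anchored at the frozen global weights.
    prox = 0.0
    for w, w_g in zip(model.parameters(), global_model.parameters()):
        prox = prox + (w - w_g.detach()).pow(2).sum()
    (loss + 0.5 * mu * prox).backward()
    optimizer.step()
    return loss.item()
```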
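Both SCAFFOLD entries implement the same drift correction: each local SGD step becomes y ← y − η(∇F_i(y) − c_i + c), where c and c_i are server and client control variates. A minimal sketch of that correction, assuming `c_global` and `c_local` are lists of tensors shaped like the model's parameters (the names are assumptions); after local training the client also refreshes c_i, e.g. via the paper's option II, c_i⁺ = c_i − c + (x − y)/(K·η).

```python
import torch

@torch.no_grad()
def scaffold_correct_grads(model, c_global, c_local):
    """Call between loss.backward() and optimizer.step().

    Replaces the raw gradient g with g - c_i + c, so a plain SGD step
    then performs the SCAFFOLD update y <- y - lr * (g - c_i + c).
    """
    for p, c, c_i in zip(model.parameters(), c_global, c_local):
        if p.grad is not None:
            p.grad.add_(c - c_i)
```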
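FedProto (AAAI'22 above) communicates class prototypes, i.e. mean feature embeddings per class, rather than model weights; the server averages each class's prototypes over the clients that observed that class. A minimal sketch of that aggregation; the dict layout is an assumption for illustration, not FedProto's actual interface.

```python
from collections import defaultdict
import torch

def aggregate_prototypes(client_protos):
    """client_protos: list of {class_label: 1-D embedding tensor} dicts.

    Returns the per-class mean prototype across all clients that
    reported a prototype for that class.
    """
    buckets = defaultdict(list)
    for protos in client_protos:
        for label, proto in protos.items():
            buckets[label].append(proto)
    return {label: torch.stack(ps).mean(dim=0) for label, ps in buckets.items()}
```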