Soptq / Dynamic_Load_Balance_DistributedDNN
Official PyTorch implementation of "DBS: Dynamic Batch Size for Distributed Deep Neural Network Training"
☆24 · Updated 4 years ago
Alternatives and similar repositories for Dynamic_Load_Balance_DistributedDNN
Users who are interested in Dynamic_Load_Balance_DistributedDNN are comparing it to the libraries listed below.
- ☆46 · Updated 5 years ago
- Partial implementation of paper "DEEP GRADIENT COMPRESSION: REDUCING THE COMMUNICATION BANDWIDTH FOR DISTRIBUTED TRAINING" ☆31 · Updated 5 years ago
- FedNAS: Federated Deep Learning via Neural Architecture Search ☆54 · Updated 4 years ago
- Practical low-rank gradient compression for distributed optimization: https://arxiv.org/abs/1905.13727 ☆149 · Updated last year
- PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models. ICML 2021 ☆55 · Updated 4 years ago
- GRACE - GRAdient ComprEssion for distributed deep learning ☆139 · Updated last year
- Implementation of (overlap) local SGD in PyTorch ☆34 · Updated 5 years ago
- Accuracy 77%. Large batch deep learning optimizer LARS for ImageNet with PyTorch and ResNet, using Horovod for distribution. Optional acc… ☆38 · Updated 4 years ago
- A Sparse-tensor Communication Framework for Distributed Deep Learning ☆13 · Updated 4 years ago
- ddl-benchmarks: Benchmarks for Distributed Deep Learning ☆36 · Updated 5 years ago
- Distilling Knowledge via Intermediate Classifiers ☆16 · Updated 4 years ago
- [ICLR 2018] Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training ☆226 · Updated last year
- Distributed ML Training Benchmarks ☆27 · Updated 2 years ago
- [NeurIPS 2021] "MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge", Geng Yuan, Xiaolong Ma, Yanzhi Wang et al… ☆18 · Updated 3 years ago
- Implementation of Parameter Server using PyTorch communication lib ☆43 · Updated 6 years ago
- [ICML 2021] "Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training" by Shiwei Liu, Lu Yin, De… ☆45 · Updated 2 years ago
- sensAI: ConvNets Decomposition via Class Parallelism for Fast Inference on Live Data ☆65 · Updated last year
- Official PyTorch Implementation of HELP: Hardware-adaptive Efficient Latency Prediction for NAS via Meta-Learning (NeurIPS 2021 Spotlight… ☆63 · Updated last year
- Codes for paper "Few Shot Network Compression via Cross Distillation", AAAI 2020. ☆31 · Updated 5 years ago
- Code for "Adaptive Gradient Quantization for Data-Parallel SGD", published in NeurIPS 2020. ☆30 · Updated 4 years ago
- [ICDCS 2023] Evaluation and Optimization of Gradient Compression for Distributed Deep Learning ☆10 · Updated 2 years ago
- ☆22 · Updated 4 years ago
- Code for Double Blind Collaborative Learning (DBCL) ☆14 · Updated 4 years ago
- A Comprehensive and Versatile Open-Source Federated Learning Framework ☆33 · Updated 2 years ago
- [IJCAI'22 Survey] Recent Advances on Neural Network Pruning at Initialization. ☆59 · Updated 2 years ago
- Code for reproducing experiments performed for Accordion ☆13 · Updated 4 years ago
- Sparsified SGD with Memory: https://arxiv.org/abs/1809.07599 ☆58 · Updated 7 years ago
- Code for "AutoMTL: A Programming Framework for Automating Efficient Multi-Task Learning" ☆52 · Updated 2 years ago
- Decentralized SGD and Consensus with Communication Compression: https://arxiv.org/abs/1907.09356 ☆74 · Updated 5 years ago
- [ICLR 2021] CompOFA: Compound Once-For-All Networks For Faster Multi-Platform Deployment ☆25 · Updated 3 years ago