HKBU-HPML / MG-WFBP
MG-WFBP: Merging Gradients Wisely for Efficient Communication in Distributed Deep Learning
☆12 · Updated 3 years ago
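The core idea behind MG-WFBP is that many small per-layer all-reduce operations are dominated by communication startup latency, so gradients from adjacent layers are merged into larger messages when that is cheaper than sending them separately. Below is a minimal sketch of the merging step in plain PyTorch; it is not the repository's actual API, and the function name, the averaging convention, and the assumption of an already-initialized `torch.distributed` process group are illustrative only.

```python
# Minimal sketch (not the repository's actual implementation): merge many
# small per-layer gradients into one buffer so that a single all-reduce
# replaces many latency-bound small ones.
import torch
import torch.distributed as dist

def merged_allreduce(grads, world_size):
    """Average a list of gradient tensors across workers with one collective call."""
    # Flatten and concatenate all gradients into a single contiguous buffer.
    flat = torch.cat([g.reshape(-1) for g in grads])
    # One all-reduce on the merged buffer instead of one per layer.
    dist.all_reduce(flat, op=dist.ReduceOp.SUM)
    flat.div_(world_size)  # average across workers
    # Copy the averaged values back into the original per-layer tensors.
    offset = 0
    for g in grads:
        n = g.numel()
        g.copy_(flat[offset:offset + n].view_as(g))
        offset += n
```

In MG-WFBP the decision of which layers to merge is driven by measured backpropagation and communication times; the sketch above only shows the mechanical merge-and-scatter step once that decision has been made.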
Related projects:
- Layer-wise Sparsification of Distributed Deep Learning ☆10 · Updated 4 years ago
- gTop-k S-SGD: A Communication-Efficient Distributed Synchronous SGD Algorithm for Deep Learning ☆35 · Updated 5 years ago
- Analyze network performance in distributed training ☆16 · Updated 3 years ago
- Code for "Solving Large-Scale Granular Resource Allocation Problems Efficiently with POP", which appeared at SOSP 2021 ☆24 · Updated 2 years ago
- ddl-benchmarks: Benchmarks for Distributed Deep Learning ☆37 · Updated 4 years ago
- Hi-Speed DNN Training with Espresso: Unleashing the Full Potential of Gradient Compression with Near-Optimal Usage Strategies (EuroSys '2… ☆12 · Updated last year
- BytePS examples (Vision, NLP, GAN, etc) ☆19 · Updated last year
- Helios Traces from SenseTime ☆47 · Updated last year
- THC: Accelerating Distributed Deep Learning Using Tensor Homomorphic Compression ☆12 · Updated last month
- PipeSwitch: Fast Pipelined Context Switching for Deep Learning Applications ☆124 · Updated 2 years ago
- GRACE - GRAdient ComprEssion for distributed deep learning ☆134 · Updated last month
- Artifacts for our SIGCOMM'22 paper Muri ☆38 · Updated 8 months ago
- Implementation of Parameter Server using PyTorch communication lib ☆42 · Updated 5 years ago
- Tiresias is a GPU cluster manager for distributed deep learning training. ☆148 · Updated 4 years ago
- A Deep Learning Cluster Scheduler ☆36 · Updated 3 years ago
- An Efficient Dynamic Resource Scheduler for Deep Learning Clusters ☆41 · Updated 6 years ago
- Machine Learning System ☆14 · Updated 4 years ago
- Understanding Top-k Sparsification in Distributed Deep Learning ☆22 · Updated 4 years ago
- QSGD-TF ☆21 · Updated 5 years ago
- A computation-parallel deep learning architecture. ☆12 · Updated 4 years ago
- Dual-way gradient sparsification approach for async DNN training, based on PyTorch. ☆11 · Updated last year
- A Cluster-Wide Model Manager to Accelerate DNN Training via Automated Training Warmup ☆31 · Updated last year
- Group-meeting collection of the HKUST System NetworkING (SING) Research Group ☆28 · Updated 4 years ago
- Code for "Heterogenity-Aware Cluster Scheduling Policies for Deep Learning Workloads", which appeared at OSDI 2020☆124Updated last month
- Model-less Inference Serving ☆78 · Updated 10 months ago