zhuangwang93 / Espresso
Hi-Speed DNN Training with Espresso: Unleashing the Full Potential of Gradient Compression with Near-Optimal Usage Strategies (EuroSys '23)
Related projects:
- Cupcake: A Compression Scheduler for Scalable Communication-Efficient Distributed Training (MLSys '23)
- THC: Accelerating Distributed Deep Learning Using Tensor Homomorphic Compression
- A Cluster-Wide Model Manager to Accelerate DNN Training via Automated Training Warmup
- Artifacts for our SIGCOMM '22 paper Muri
- Ok-Topk: a scheme for distributed training with sparse gradients. It integrates a novel sparse allreduce algorithm (less than 6k c…); a generic top-k sparsification sketch follows below.
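
For context, here is a minimal sketch of the generic top-k gradient sparsification idea that schemes like Ok-Topk build on, assuming PyTorch. The function names are hypothetical, and this is not the repository's actual allreduce algorithm, only the compression step it starts from:

```python
# Illustrative top-k gradient sparsification (not Ok-Topk's actual
# algorithm): keep only the k largest-magnitude entries of a gradient
# tensor and exchange (index, value) pairs instead of the dense tensor.
import math
import torch

def topk_compress(grad: torch.Tensor, k: int):
    """Return indices and values of the k largest-magnitude entries."""
    flat = grad.flatten()
    _, idx = torch.topk(flat.abs(), k)
    return idx, flat[idx]

def topk_decompress(idx: torch.Tensor, vals: torch.Tensor, shape):
    """Scatter the sparse (index, value) pairs back into a dense tensor."""
    flat = torch.zeros(math.prod(shape), dtype=vals.dtype)
    flat[idx] = vals
    return flat.view(shape)

# Example: compress a gradient to roughly 1% density.
grad = torch.randn(1024, 1024)
k = grad.numel() // 100
idx, vals = topk_compress(grad, k)
approx = topk_decompress(idx, vals, grad.shape)
```

In a real system, the (index, value) pairs from each worker would be exchanged via an allreduce-style collective; making that exchange efficient and convergent despite workers selecting different indices is the part these projects address.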