zhuangwang93 / Cupcake
Cupcake: A Compression Scheduler for Scalable Communication-Efficient Distributed Training (MLSys '23)
☆9 Updated last year
Alternatives and similar repositories for Cupcake:
Users interested in Cupcake are comparing it to the repositories listed below.
- Hi-Speed DNN Training with Espresso: Unleashing the Full Potential of Gradient Compression with Near-Optimal Usage Strategies (EuroSys '2… ☆15 Updated last year
- ☆16 Updated 10 months ago
- Artifacts for our SIGCOMM '23 paper Ditto ☆15 Updated last year
- A minimal demo of PyTorch's distributed extension functionality for collectives. ☆11 Updated 7 months ago
- ☆14 Updated 2 years ago
- Reading seminar in the Harvard Cloud Networking and Systems Group ☆16 Updated 2 years ago
- SocksDirect code repository ☆19 Updated 2 years ago
- Code for "Shockwave: Fair and Efficient Cluster Scheduling for Dynamic Adaptation in Machine Learning" [NSDI '23] ☆39 Updated 2 years ago
- THC: Accelerating Distributed Deep Learning Using Tensor Homomorphic Compression ☆15 Updated 7 months ago
- Artifacts for our SIGCOMM '22 paper Muri ☆41 Updated last year
- Source code for the OSDI 2023 paper "Cilantro: Performance-Aware Resource Allocation for General Objectives via Online Feedback" ☆38 Updated last year
- A Cluster-Wide Model Manager to Accelerate DNN Training via Automated Training Warmup ☆34 Updated 2 years ago
- Artifact for "Apparate: Rethinking Early Exits to Tame Latency-Throughput Tensions in ML Serving" [SOSP '24] ☆22 Updated 4 months ago
- ☆44 Updated 3 years ago
- RPCNIC: A High-Performance and Reconfigurable PCIe-Attached RPC Accelerator [HPCA 2025] ☆10 Updated 3 months ago
- Deduplication over disaggregated memory for Serverless Computing ☆12 Updated 3 years ago
- Ok-Topk is a scheme for distributed training with sparse gradients. Ok-Topk integrates a novel sparse allreduce algorithm (less than 6k c… ☆25 Updated 2 years ago
- ☆49 Updated 2 years ago
- Herald: Accelerating Neural Recommendation Training with Embedding Scheduling (NSDI 2024) ☆22 Updated 10 months ago
- SOTA Learning-Augmented Systems ☆35 Updated 2 years ago
- Open-source implementation of "Helix: Serving Large Language Models over Heterogeneous GPUs and Network via Max-Flow" ☆28 Updated 4 months ago
- Artifacts for our ASPLOS '23 paper ElasticFlow ☆53 Updated 10 months ago
- ☆24 Updated last year
- A Rust-based benchmark for BlueField SmartNICs. ☆28 Updated last year
- Primo: Practical Learning-Augmented Systems with Interpretable Models ☆19 Updated last year
- Analyze network performance in distributed training ☆18 Updated 4 years ago
- Bamboo is a system for running large pipeline-parallel DNNs affordably, reliably, and efficiently using spot instances. ☆49 Updated 2 years ago
- Official repository for "QSync: Quantization-Minimized Synchronous Distributed Training Across Hybrid Devices" (IPDPS '24) ☆19 Updated last year
- Deferred Continuous Batching in Resource-Efficient Large Language Model Serving (EuroMLSys 2024) ☆13 Updated 10 months ago
- SHADE: Enable Fundamental Cacheability for Distributed Deep Learning Training ☆31 Updated 2 years ago