Thesys-lab / parity-models
Learning-Based Coded Computation
☆46 · Updated last year
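For context, parity-models implements learning-based coded computation for prediction serving: several queries are encoded into a single "parity" query, a learned parity model runs on it, and a missing prediction can be recovered from the parity output plus the available predictions. The snippet below is a minimal, hypothetical sketch of that idea; `base_model`, `parity_model`, and the shapes are toy stand-ins, not the repository's actual API or models.

```python
# Hypothetical sketch of the parity-models idea (erasure-coded prediction serving).
# Not the repository's API: models, names, and shapes are illustrative only.
import torch
import torch.nn as nn

k = 2  # number of queries protected by one parity query

base_model = nn.Linear(8, 4)    # stand-in for the deployed model F
parity_model = nn.Linear(8, 4)  # learned to approximate the sum of F's outputs

queries = [torch.randn(1, 8) for _ in range(k)]
parity_query = torch.stack(queries).sum(dim=0)  # addition-based encoding of the k queries

with torch.no_grad():
    preds = [base_model(x) for x in queries]   # any one of these may be lost (straggler/failure)
    parity_pred = parity_model(parity_query)   # F_P(x1 + ... + xk)

# Suppose prediction 0 is unavailable: reconstruct it from the parity prediction
# and the k-1 predictions that did arrive.
reconstructed = parity_pred - sum(preds[1:])
# Training drives F_P(x1+...+xk) toward F(x1)+...+F(xk), so `reconstructed`
# approximates the missing preds[0].
```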
Related projects:
- Code for "Shockwave: Fair and Efficient Cluster Scheduling for Dynamic Adaptation in Machine Learning" [NSDI '23]☆38Updated last year
- ☆67Updated last year
- ☆48Updated 3 years ago
- ☆41Updated 3 years ago
- Hi-Speed DNN Training with Espresso: Unleashing the Full Potential of Gradient Compression with Near-Optimal Usage Strategies (EuroSys '2…☆12Updated 11 months ago
- Analyze network performance in distributed training☆16Updated 3 years ago
- Bamboo is a system for running large pipeline-parallel DNNs affordably, reliably, and efficiently using spot instances.☆46Updated last year
- Artifacts for our SIGCOMM'22 paper Muri☆38Updated 8 months ago
- Aequitas enables RPC-level QoS in datacenter networks.☆16Updated 2 years ago
- THC: Accelerating Distributed Deep Learning Using Tensor Homomorphic Compression☆12Updated last month
- TRAGEN: A Synthetic Trace Generator for Realistic Cache Simulations☆18Updated 5 months ago
- Dorylus: Affordable, Scalable, and Accurate GNN Training☆77Updated 3 years ago
- ☆13Updated 2 years ago
- Code for reproducing the experiments performed for Accordion · ☆12 · Updated 3 years ago
- A Federated Execution Engine for Fast Distributed Computation Over Slow Networks · ☆25 · Updated 3 years ago
- A Cluster-Wide Model Manager to Accelerate DNN Training via Automated Training Warmup · ☆31 · Updated last year
- The prototype for the NSDI paper "NetHint: White-Box Networking for Multi-Tenant Data Centers" · ☆25 · Updated 7 months ago
- Cupcake: A Compression Scheduler for Scalable Communication-Efficient Distributed Training (MLSys '23) · ☆8 · Updated last year
- ☆19 · Updated last year
- A Generic Resource-Aware Hyperparameter Tuning Execution Engine · ☆15 · Updated 2 years ago
- ☆14 · Updated 2 years ago
- SHADE: Enable Fundamental Cacheability for Distributed Deep Learning Training · ☆28 · Updated last year
- This repository contains code for the paper: Bergsma S., Zeyl T., Senderovich A., and Beck J. C., "Generating Complex, Realistic Cloud Wo… · ☆42 · Updated 2 years ago
- Phoenix dataplane system service · ☆51 · Updated 3 months ago
- Thinking is hard - automate it · ☆18 · Updated 2 years ago
- ☆23 · Updated last year
- Virtual Memory Abstraction for Serverless Architectures · ☆45 · Updated 2 years ago
- Ok-Topk is a scheme for distributed training with sparse gradients. Ok-Topk integrates a novel sparse allreduce algorithm (less than 6k c… · ☆23 · Updated last year
- NetLock: Fast, Centralized Lock Management Using Programmable Switches · ☆28 · Updated 4 years ago
- ☆25 · Updated 2 months ago