sands-lab / omnireduce
☆68 · Updated 2 years ago
Alternatives and similar repositories for omnireduce
Users interested in omnireduce are comparing it to the libraries listed below.
- Hi-Speed DNN Training with Espresso: Unleashing the Full Potential of Gradient Compression with Near-Optimal Usage Strategies (EuroSys '2… ☆15 · Updated 2 years ago
- ☆44 · Updated 4 years ago
- PipeSwitch: Fast Pipelined Context Switching for Deep Learning Applications ☆127 · Updated 3 years ago
- Artifacts for our SIGCOMM'22 paper Muri ☆43 · Updated 2 years ago
- Artifact for "Shockwave: Fair and Efficient Cluster Scheduling for Dynamic Adaptation in Machine Learning" [NSDI '23] ☆46 · Updated 3 years ago
- Analyze network performance in distributed training ☆20 · Updated 5 years ago
- ☆84 · Updated 4 years ago
- ☆25 · Updated 2 years ago
- BytePS examples (Vision, NLP, GAN, etc.) ☆19 · Updated 3 years ago
- TACCL: Guiding Collective Algorithm Synthesis using Communication Sketches ☆79 · Updated 2 years ago
- ☆56 · Updated 4 years ago
- ☆15 · Updated 3 years ago
- ddl-benchmarks: Benchmarks for Distributed Deep Learning ☆36 · Updated 5 years ago
- A GPU-accelerated DNN inference serving system that supports instant kernel preemption and biased concurrent execution in GPU scheduling. ☆44 · Updated 3 years ago
- Tiresias is a GPU cluster manager for distributed deep learning training. ☆164 · Updated 5 years ago
- The prototype for the NSDI paper "NetHint: White-Box Networking for Multi-Tenant Data Centers" ☆26 · Updated last year
- Model-less Inference Serving ☆93 · Updated 2 years ago
- Learning-Based Coded Computation ☆47 · Updated 3 years ago
- A Cluster-Wide Model Manager to Accelerate DNN Training via Automated Training Warmup ☆35 · Updated 3 years ago
- ☆47 · Updated last year
- ☆16 · Updated 8 months ago
- Efficient GPU communication over multiple NICs. ☆21 · Updated last month
- DISB is a new DNN inference serving benchmark with diverse workloads and models, as well as real-world traces. ☆58 · Updated last year
- ☆65 · Updated last year
- ☆83 · Updated 6 months ago
- Fine-grained GPU sharing primitives ☆147 · Updated 5 months ago
- Ok-Topk is a scheme for distributed training with sparse gradients. Ok-Topk integrates a novel sparse allreduce algorithm (less than 6k c… ☆27 · Updated 3 years ago (see the sketch after this list)
- ☆52 · Updated 3 years ago
- Repository for MLCommons Chakra schema and tools ☆39 · Updated 2 years ago
- ☆25 · Updated 3 years ago
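Several of the listed projects (omnireduce itself, Espresso, Ok-Topk) revolve around gradient sparsification for distributed training. As background for the Ok-Topk entry above, here is a minimal sketch of the local top-k selection step such schemes build on, assuming PyTorch; `topk_sparsify` is a hypothetical helper for illustration, not an API from any of the listed repositories:

```python
import torch

def topk_sparsify(grad: torch.Tensor, k: int):
    """Keep only the k largest-magnitude entries of a gradient tensor.

    Returns (values, indices), the sparse payload a worker would
    communicate instead of the full dense gradient.
    """
    flat = grad.flatten()
    # Indices of the k entries with the largest absolute value.
    _, idx = torch.topk(flat.abs(), k)
    return flat[idx], idx

# Illustrative usage: sparsify one layer's gradient before exchange.
g = torch.randn(1024)
vals, idx = topk_sparsify(g, k=32)

# Receiver-side reconstruction: untouched entries are implicitly zero.
dense = torch.zeros_like(g)
dense[idx] = vals
```

This only shows local selection; the hard part these repositories address is exchanging and merging such sparse payloads efficiently across workers (e.g., Ok-Topk's sparse allreduce), which the sketch does not attempt to reproduce.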