sands-lab / omnireduce
☆68 · Updated 2 years ago
Alternatives and similar repositories for omnireduce
Users interested in omnireduce are comparing it to the libraries listed below.
- ☆44 · Updated 4 years ago
- PipeSwitch: Fast Pipelined Context Switching for Deep Learning Applications · ☆126 · Updated 3 years ago
- Hi-Speed DNN Training with Espresso: Unleashing the Full Potential of Gradient Compression with Near-Optimal Usage Strategies (EuroSys '2… · ☆15 · Updated 2 years ago
- ☆84 · Updated 3 years ago
- ddl-benchmarks: Benchmarks for Distributed Deep Learning · ☆36 · Updated 5 years ago
- BytePS examples (Vision, NLP, GAN, etc.) · ☆19 · Updated 3 years ago
- Artifacts for our SIGCOMM '22 paper Muri · ☆44 · Updated last year
- ☆15 · Updated 3 years ago
- ☆82 · Updated 5 months ago
- A GPU-accelerated DNN inference serving system that supports instant kernel preemption and biased concurrent execution in GPU scheduling. · ☆43 · Updated 3 years ago
- ☆25 · Updated 2 years ago
- Analyze network performance in distributed training · ☆19 · Updated 5 years ago
- ☆57 · Updated 4 years ago
- DISB is a new DNN inference serving benchmark with diverse workloads and models, as well as real-world traces. · ☆57 · Updated last year
- REEF is a GPU-accelerated DNN inference serving system that enables instant kernel preemption and biased concurrent execution in GPU sche… · ☆103 · Updated 2 years ago
- TACCL: Guiding Collective Algorithm Synthesis using Communication Sketches · ☆77 · Updated 2 years ago
- Artifact for "Shockwave: Fair and Efficient Cluster Scheduling for Dynamic Adaptation in Machine Learning" [NSDI '23] · ☆46 · Updated 3 years ago
- Model-less Inference Serving · ☆92 · Updated 2 years ago
- The prototype for the NSDI paper "NetHint: White-Box Networking for Multi-Tenant Data Centers" · ☆26 · Updated last year
- Ensō is a high-performance streaming interface for NIC-application communication. · ☆76 · Updated 3 months ago
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving (OSDI '23) · ☆91 · Updated 2 years ago
- Tiresias is a GPU cluster manager for distributed deep learning training. · ☆164 · Updated 5 years ago
- ☆17 · Updated 2 years ago
- Ok-Topk is a scheme for distributed training with sparse gradients. Ok-Topk integrates a novel sparse allreduce algorithm (less than 6k c… · ☆27 · Updated 2 years ago
- ☆43 · Updated last year
- ☆16 · Updated last year
- SOTA Learning-augmented Systems · ☆37 · Updated 3 years ago
- Official repository for "QSync: Quantization-Minimized Synchronous Distributed Training Across Hybrid Devices" (IPDPS '24) · ☆20 · Updated last year
- Managed collective communication service · ☆22 · Updated last year
- Justitia provides RDMA isolation between applications with diverse requirements. · ☆42 · Updated 3 years ago