netx-repo / training-bottleneck
Analyze network performance in distributed training
☆17 · Updated 4 years ago
Alternatives and similar repositories for training-bottleneck:
Users interested in training-bottleneck are comparing it to the repositories listed below.
- PipeSwitch: Fast Pipelined Context Switching for Deep Learning Applications ☆127 · Updated 2 years ago
- BytePS examples (Vision, NLP, GAN, etc.) ☆19 · Updated 2 years ago
- Tiresias is a GPU cluster manager for distributed deep learning training. ☆151 · Updated 4 years ago
- ☆21 · Updated 2 years ago
- ☆22 · Updated 5 years ago
- ☆53 · Updated 4 years ago
- ☆37 · Updated 3 years ago
- Code for "Shockwave: Fair and Efficient Cluster Scheduling for Dynamic Adaptation in Machine Learning" [NSDI '23] ☆39 · Updated 2 years ago
- A GPU-accelerated DNN inference serving system that supports instant kernel preemption and biased concurrent execution in GPU scheduling. ☆40 · Updated 2 years ago
- ☆83 · Updated 2 years ago
- Bamboo is a system for running large pipeline-parallel DNNs affordably, reliably, and efficiently using spot instances. ☆48 · Updated 2 years ago
- Artifacts for our ASPLOS '23 paper ElasticFlow ☆53 · Updated 9 months ago
- ☆48 · Updated 2 years ago
- Artifacts for our SIGCOMM '22 paper Muri ☆41 · Updated last year
- ☆43 · Updated 3 years ago
- Fine-grained GPU sharing primitives ☆141 · Updated 4 years ago
- Helios traces from SenseTime ☆53 · Updated 2 years ago
- ☆35 · Updated 4 years ago
- Code for "Heterogeneity-Aware Cluster Scheduling Policies for Deep Learning Workloads", which appeared at OSDI 2020 ☆126 · Updated 6 months ago
- An Efficient Dynamic Resource Scheduler for Deep Learning Clusters ☆42 · Updated 7 years ago
- TACCL: Guiding Collective Algorithm Synthesis using Communication Sketches ☆69 · Updated last year
- An Efficient Pipelined Data Parallel Approach for Training Large Model ☆73 · Updated 4 years ago
- [NSDI 2023] TopoOpt: Optimizing the Network Topology for Distributed DNN Training ☆27 · Updated 5 months ago
- ☆69 · Updated last year
- Model-less Inference Serving ☆84 · Updated last year
- Hi-Speed DNN Training with Espresso: Unleashing the Full Potential of Gradient Compression with Near-Optimal Usage Strategies (EuroSys '2… ☆15 · Updated last year
- ☆185 · Updated 5 years ago
- ☆14 · Updated 2 years ago
- ☆20 · Updated 3 years ago
- Ultra | Ultimate | Unified CCL ☆32 · Updated this week