msr-fiddle / pipedream
☆388 · Updated 2 years ago
Alternatives and similar repositories for pipedream:
Users interested in pipedream are comparing it to the libraries listed below.
- A GPipe implementation in PyTorch ☆835 · Updated 7 months ago
- An Efficient Pipelined Data Parallel Approach for Training Large Model ☆74 · Updated 4 years ago
- A flexible and efficient deep neural network (DNN) compiler that generates high-performance executables from a DNN model description. ☆979 · Updated 5 months ago
- Lightweight and Parallel Deep Learning Framework ☆264 · Updated 2 years ago
- Microsoft Collective Communication Library ☆340 · Updated last year
- A tensor-aware point-to-point communication primitive for machine learning ☆253 · Updated 2 years ago
- Resource-adaptive cluster scheduler for deep learning training. ☆435 · Updated 2 years ago
- Simple Distributed Deep Learning on TensorFlow ☆134 · Updated 2 years ago
- Fine-grained GPU sharing primitives ☆141 · Updated 5 years ago
- PipeSwitch: Fast Pipelined Context Switching for Deep Learning Applications ☆127 · Updated 2 years ago
- The Tensor Algebra SuperOptimizer for Deep Learning ☆703 · Updated 2 years ago
- A baseline repository of Auto-Parallelism in Training Neural Networks ☆143 · Updated 2 years ago
- NCCL Fast Socket is a transport layer plugin to improve NCCL collective communication performance on Google Cloud. ☆116 · Updated last year
- [MLSys 2021] IOS: Inter-Operator Scheduler for CNN Acceleration ☆197 · Updated 2 years ago
- ☆577 · Updated 6 years ago
- ☆141 · Updated last month
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training. ☆266 · Updated last year
- Collective communications library with various primitives for multi-machine training. ☆1,273 · Updated this week
- Python bindings for NVTX ☆66 · Updated last year
- Synthesizer for optimal collective communication algorithms ☆104 · Updated 11 months ago
- Model-less Inference Serving ☆85 · Updated last year
- A library to analyze PyTorch traces. ☆342 · Updated this week
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving (OSDI 23) ☆82 · Updated last year
- TVM integration into PyTorch ☆452 · Updated 5 years ago
- Code for "Heterogeneity-Aware Cluster Scheduling Policies for Deep Learning Workloads", which appeared at OSDI 2020 ☆126 · Updated 7 months ago
- ☆83 · Updated 2 years ago
- Dive into Deep Learning Compiler ☆647 · Updated 2 years ago
- ☆184 · Updated 5 years ago
- A GPU performance profiling tool for PyTorch models ☆504 · Updated 3 years ago
- GPU-scheduler-for-deep-learning ☆202 · Updated 4 years ago