siddharth9820 / MoDNN
Implementation of algorithms for memory-optimized deep neural network training
☆10 · Updated 5 years ago
Alternatives and similar repositories for MoDNN
Users interested in MoDNN are comparing it to the repositories listed below.
- ☆41 · Updated 5 years ago
- A Portable C Library for Distributed CNN Inference on IoT Edge Clusters ☆87 · Updated 5 years ago
- Boost hardware utilization for ML training workloads via Inter-model Horizontal Fusion ☆32 · Updated last year
- Source code for the paper "A Latency-Predictable Multi-Dimensional Optimization Framework for DNN-driven Autonomous Systems" ☆22 · Updated 4 years ago
- Model-less Inference Serving ☆92 · Updated 2 years ago
- RL-Scope: Cross-Stack Profiling for Deep Reinforcement Learning Workloads ☆46 · Updated 4 years ago
- A Cluster-Wide Model Manager to Accelerate DNN Training via Automated Training Warmup ☆35 · Updated 2 years ago
- Code for reproducing the experiments performed for Accordion ☆13 · Updated 4 years ago
- Dual-way gradient sparsification approach for async DNN training, based on PyTorch ☆11 · Updated 3 years ago
- ☆44 · Updated 4 years ago
- ☆68 · Updated 2 years ago
- Code for "Heterogenity-Aware Cluster Scheduling Policies for Deep Learning Workloads", which appeared at OSDI 2020☆134Updated last year
- PipeSwitch: Fast Pipelined Context Switching for Deep Learning Applications ☆126 · Updated 3 years ago
- PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models (ICML 2021) ☆56 · Updated 4 years ago
- GRACE: GRAdient ComprEssion for distributed deep learning ☆139 · Updated last year
- ☆51 · Updated 3 years ago
- ddl-benchmarks: Benchmarks for Distributed Deep Learning ☆36 · Updated 5 years ago
- Code for "Solving Large-Scale Granular Resource Allocation Problems Efficiently with POP", which appeared at SOSP 2021 ☆28 · Updated 3 years ago
- Hi-Speed DNN Training with Espresso: Unleashing the Full Potential of Gradient Compression with Near-Optimal Usage Strategies (EuroSys '2… ☆15 · Updated 2 years ago
- Ok-Topk is a scheme for distributed training with sparse gradients. Ok-Topk integrates a novel sparse allreduce algorithm (less than 6k c… ☆27 · Updated 3 years ago. A minimal top-k sparsification sketch appears after this list.
- MobiSys#114 ☆22 · Updated 2 years ago
- ☆38 · Updated 5 months ago
- THC: Accelerating Distributed Deep Learning Using Tensor Homomorphic Compression ☆20 · Updated last year
- An awesome list of papers on edge-AI inference ☆97 · Updated last year
- ☆38 · Updated 4 years ago
- ☆14 · Updated 3 years ago
- 🔮 Execution time predictions for deep neural network training iterations across different GPUs ☆63 · Updated 3 years ago
- Machine Learning System ☆14 · Updated 5 years ago
- Source code of IPA, https://escholarship.org/uc/item/2p0805dq ☆12 · Updated last year
- [ICDCS 2023] DeAR: Accelerating Distributed Deep Learning with Fine-Grained All-Reduce Pipelining ☆12 · Updated 2 years ago
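
Several of the repositories above (GRACE, Ok-Topk, Espresso, THC, the dual-way sparsification project) center on gradient compression for distributed training. Below is a minimal PyTorch sketch of the top-k sparsification idea they share; the helper names are hypothetical and the code is not taken from any of the listed codebases.

```python
import math
import torch

def topk_compress(grad: torch.Tensor, ratio: float = 0.01):
    """Keep only the largest-magnitude `ratio` fraction of gradient entries."""
    flat = grad.flatten()
    k = max(1, int(flat.numel() * ratio))
    _, indices = torch.topk(flat.abs(), k)     # indices of the largest |g_i|
    # Ship (values, indices) instead of the dense tensor.
    return flat[indices], indices, grad.shape

def topk_decompress(values, indices, shape):
    """Scatter the sparse entries back into a dense zero tensor."""
    flat = torch.zeros(math.prod(shape), dtype=values.dtype, device=values.device)
    flat[indices] = values
    return flat.view(shape)

if __name__ == "__main__":
    g = torch.randn(1024, 1024)
    vals, idx, shape = topk_compress(g, ratio=0.01)
    g_hat = topk_decompress(vals, idx, shape)
    # Real systems also accumulate the dropped residual g - g_hat locally
    # as error feedback for the next iteration.
    print(f"kept {vals.numel()} of {g.numel()} entries")
```

In an actual distributed run, each worker would exchange the (values, indices) pairs through an allgather/allreduce variant rather than the dense tensor; making that exchange efficient is precisely what Ok-Topk's sparse allreduce targets.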