siddharth9820 / MoDNN
Implementation of algorithms for memory-optimized deep neural network training
☆10 · Updated 4 years ago
Alternatives and similar repositories for MoDNN
Users interested in MoDNN are comparing it to the libraries listed below.
- ☆40 · Updated 4 years ago
- ☆10 · Updated 4 years ago
- Boost hardware utilization for ML training workloads via Inter-model Horizontal Fusion ☆32 · Updated last year
- Code for reproducing the experiments performed for Accordion ☆13 · Updated 4 years ago
- A Cluster-Wide Model Manager to Accelerate DNN Training via Automated Training Warmup ☆35 · Updated 2 years ago
- Source code for the paper: "A Latency-Predictable Multi-Dimensional Optimization Framework for DNN-driven Autonomous Systems" ☆22 · Updated 4 years ago
- Hi-Speed DNN Training with Espresso: Unleashing the Full Potential of Gradient Compression with Near-Optimal Usage Strategies (EuroSys '2… ☆15 · Updated last year
- Ok-Topk is a scheme for distributed training with sparse gradients. Ok-Topk integrates a novel sparse allreduce algorithm (less than 6k c… ☆26 · Updated 2 years ago
- RL-Scope: Cross-Stack Profiling for Deep Reinforcement Learning Workloads ☆44 · Updated 4 years ago
- ☆19 · Updated 3 years ago
- Source code and datasets for Ekya, a system for continuous learning on the edge. ☆106 · Updated 3 years ago
- PipeSwitch: Fast Pipelined Context Switching for Deep Learning Applications ☆127 · Updated 3 years ago
- GRACE - GRAdient ComprEssion for distributed deep learning ☆140 · Updated 11 months ago
- A Portable C Library for Distributed CNN Inference on IoT Edge Clusters ☆82 · Updated 5 years ago
- PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models (ICML 2021) ☆56 · Updated 3 years ago
- Source code of IPA, https://escholarship.org/uc/item/2p0805dq ☆10 · Updated last year
- ☆15 · Updated 11 months ago
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving (OSDI 23) ☆82 · Updated 2 years ago
- Dual-way gradient sparsification approach for async DNN training, based on PyTorch. ☆11 · Updated 2 years ago
- ☆14 · Updated 3 years ago
- ☆47 · Updated 2 years ago
- Model-less Inference Serving ☆88 · Updated last year
- We present a set of all-reduce compatible gradient compression algorithms which significantly reduce the communication overhead while mai… ☆10 · Updated 3 years ago
- Layer-wise Sparsification of Distributed Deep Learning ☆10 · Updated 5 years ago
- ☆22 · Updated 3 years ago
- ☆37 · Updated 3 weeks ago
- A deep learning-driven scheduler for elastic training in deep learning clusters ☆30 · Updated 4 years ago
- [ICLR 2018] Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training (the top-k idea behind it is sketched after this list) ☆222 · Updated last year
- [ACM SIGCOMM 2024] "m3: Accurate Flow-Level Performance Estimation using Machine Learning" by Chenning Li, Arash Nasr-Esfahany, Kevin Zha… ☆24 · Updated 9 months ago
- Surrogate-based Hyperparameter Tuning System ☆28 · Updated 2 years ago
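
Several entries above (Deep Gradient Compression, GRACE, Ok-Topk, and the layer-wise and dual-way sparsification projects) share one core idea: communicate only the largest-magnitude gradient entries instead of the dense tensor. The sketch below illustrates that top-k idea in plain NumPy; it is a minimal illustration with made-up function names and an arbitrary 1% ratio, not code from any of the listed repositories.

```python
import numpy as np

def topk_sparsify(grad: np.ndarray, ratio: float = 0.01):
    """Keep only the k largest-magnitude entries of a flattened gradient.

    Returns (indices, values): the compressed representation that would be
    communicated instead of the dense tensor. (Illustrative helper, not
    an API from any repository listed above.)
    """
    flat = grad.ravel()
    k = max(1, int(flat.size * ratio))
    # argpartition selects the k largest |g_i| in O(n), no full sort needed.
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx]

def topk_desparsify(idx, vals, shape):
    """Rebuild a dense tensor from (indices, values); dropped entries are zero."""
    flat = np.zeros(int(np.prod(shape)), dtype=vals.dtype)
    flat[idx] = vals
    return flat.reshape(shape)

# Toy round trip: compress a fake gradient to 1% of its entries.
rng = np.random.default_rng(0)
g = rng.standard_normal((256, 256))
idx, vals = topk_sparsify(g, ratio=0.01)
g_hat = topk_desparsify(idx, vals, g.shape)
print(f"kept {vals.size} of {g.size} entries ({100 * vals.size / g.size:.1f}%)")
```

Production schemes layer more on top of this: error feedback that accumulates the dropped residual locally before the next step (as in Deep Gradient Compression), and a sparsity-aware allreduce so compressed gradients can still be aggregated efficiently (the focus of Ok-Topk).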