cake-lab / perseus
☆10 · Updated 2 years ago
Alternatives and similar repositories for perseus
Users interested in perseus are comparing it to the libraries listed below.
- A deep-learning-driven scheduler for elastic training in deep learning clusters ☆31 · Updated 4 years ago
- ☆23 · Updated 3 years ago
- Simple Distributed Deep Learning on TensorFlow ☆134 · Updated 5 months ago
- sensAI: ConvNets Decomposition via Class Parallelism for Fast Inference on Live Data ☆65 · Updated last year
- A Deep Learning Cluster Scheduler ☆39 · Updated 4 years ago
- This is the (evolving) reading list for the seminar. ☆60 · Updated 5 years ago
- Machine Learning System ☆14 · Updated 5 years ago
- Exploiting Cloud Services for Cost-Effective, SLO-Aware Machine Learning Inference Serving ☆37 · Updated 5 years ago
- ddl-benchmarks: Benchmarks for Distributed Deep Learning ☆36 · Updated 5 years ago
- Code for "Solving Large-Scale Granular Resource Allocation Problems Efficiently with POP", which appeared at SOSP 2021 ☆27 · Updated 3 years ago
- PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models. ICML 2021 ☆56 · Updated 4 years ago
- GPU topology-aware scheduler ☆13 · Updated 8 years ago
- Resource-adaptive cluster scheduler for deep learning training. ☆449 · Updated 2 years ago
- A high-performance distributed deep learning system targeting large-scale and automated distributed training. If you have any interests, … ☆123 · Updated last year
- 📉 Alibaba cluster analysis ☆15 · Updated 7 years ago
- PipeSwitch: Fast Pipelined Context Switching for Deep Learning Applications ☆126 · Updated 3 years ago
- Analyze network performance in distributed training ☆19 · Updated 5 years ago
- Helios Traces from SenseTime ☆59 · Updated 3 years ago
- BytePS examples (Vision, NLP, GAN, etc.) ☆19 · Updated 2 years ago
- High-performance RDMA-based distributed feature collection component for training GNN models on EXTREMELY large graphs ☆55 · Updated 3 years ago
- ☆38 · Updated 4 years ago
- Tiresias is a GPU cluster manager for distributed deep learning training. ☆163 · Updated 5 years ago
- Fast and Adaptive Distributed Machine Learning for TensorFlow, PyTorch and MindSpore. ☆296 · Updated last year
- 🔮 Execution time predictions for deep neural network training iterations across different GPUs. ☆62 · Updated 2 years ago
- Deadline-based hyperparameter tuning on RayTune. ☆31 · Updated 5 years ago
- An Efficient Dynamic Resource Scheduler for Deep Learning Clusters ☆41 · Updated 8 years ago
- [ICDCS 2023] DeAR: Accelerating Distributed Deep Learning with Fine-Grained All-Reduce Pipelining ☆11 · Updated last year
- Distributed training across various deep learning (DL) frameworks, including TensorFlow, TensorFlow 2, PyTorch, Chainer, Caffe, MXNet ... ☆22 · Updated 5 years ago
- Surrogate-based Hyperparameter Tuning System ☆27 · Updated 2 years ago
- Fine-grained GPU sharing primitives ☆147 · Updated 3 months ago