HPDL-Group / Merak
☆82 · Updated 7 months ago
Alternatives and similar repositories for Merak
Users who are interested in Merak are comparing it to the libraries listed below.
- nnScaler: Compiling DNN models for Parallel Training (☆121, updated 3 months ago)
- A baseline repository of Auto-Parallelism in Training Neural Networks (☆147, updated 3 years ago)
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models (☆69, updated 9 months ago)
- An Efficient Pipelined Data Parallel Approach for Training Large Model (☆76, updated 5 years ago)
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters (☆43, updated 3 years ago)
- PET: Optimizing Tensor Programs with Partially Equivalent Transformations and Automated Corrections (☆122, updated 3 years ago)
- FTPipe and related pipeline model parallelism research (☆43, updated 2 years ago)
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity (☆230, updated 2 years ago)
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving (OSDI 23) (☆92, updated 2 years ago)
- Synthesizer for optimal collective communication algorithms (☆121, updated last year)
- A lightweight design for computation-communication overlap (☆196, updated 2 months ago)
- LLM serving cluster simulator (☆127, updated last year)
- An experimental parallel training platform (☆56, updated last year)
- DietCode Code Release (☆64, updated 3 years ago)
- LLM training technologies developed by Kwai (☆67, updated 3 weeks ago)
- An extension of TVMScript to write simple and high-performance GPU kernels with tensor cores (☆51, updated last year)
- A ChatGPT (GPT-3.5) & GPT-4 Workload Trace to Optimize LLM Serving Systems (☆226, updated 5 months ago)
- Zero Bubble Pipeline Parallelism (☆442, updated 7 months ago)
- A resilient distributed training framework (☆96, updated last year)
- [MLSys 2021] IOS: Inter-Operator Scheduler for CNN Acceleration (☆200, updated 3 years ago)
- Boost hardware utilization for ML training workloads via Inter-model Horizontal Fusion (☆32, updated last year)