awslabs / slapo
A schedule language for large model training
☆152 · Updated 4 months ago
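To give a sense of what a "schedule language" means in this context, here is a minimal sketch of decoupling a model definition from its training-time optimizations. The `create_schedule`/`build` entry points and the `shard`/`checkpoint` primitives follow the pattern described for Slapo, but the exact names and signatures below are illustrative assumptions, not a verified copy of the library's API.

```python
# Minimal sketch of the schedule-language idea (names are assumptions,
# not a verified slapo API): the model definition stays untouched,
# while a separate "schedule" progressively applies optimizations.
import torch.nn as nn
import slapo  # see awslabs/slapo for the actual API

class MLP(nn.Module):
    def __init__(self, d=1024):
        super().__init__()
        self.fc1 = nn.Linear(d, 4 * d)
        self.fc2 = nn.Linear(4 * d, d)

    def forward(self, x):
        return self.fc2(nn.functional.gelu(self.fc1(x)))

model = MLP()

# Create a schedule over the unmodified model.
sch = slapo.create_schedule(model)

# Apply optimizations by module name, without editing the model code:
# shard fc1's weight across tensor-parallel ranks and recompute
# activations during the backward pass.
sch["fc1"].shard("weight", axis=0)  # tensor parallelism (assumed primitive)
sch.checkpoint()                    # activation checkpointing (assumed primitive)

# Materialize an optimized model for training.
opt_model, _ = slapo.build(sch)
```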
Alternatives and similar repositories for slapo
Users interested in slapo are comparing it to the libraries listed below.
- ☆145 · Updated 11 months ago
- ☆42 · Updated 2 years ago
- ☆77 · Updated 4 years ago
- ☆23 · Updated 4 months ago
- SparseTIR: Sparse Tensor Compiler for Deep Learning ☆141 · Updated 2 years ago
- PET: Optimizing Tensor Programs with Partially Equivalent Transformations and Automated Corrections ☆122 · Updated 3 years ago
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters. ☆43 · Updated 3 years ago
- An extension of TVMScript for writing simple, high-performance GPU kernels with Tensor Cores. ☆51 · Updated last year
- System for automated integration of deep learning backends. ☆47 · Updated 3 years ago
- ☆92 · Updated 3 years ago
- ☆84 · Updated 3 years ago
- DietCode Code Release ☆65 · Updated 3 years ago
- ☆164 · Updated last year
- Training neural networks in TensorFlow 2.0 with 5x less memory ☆137 · Updated 3 years ago
- GitHub mirror of the triton-lang/triton repo. ☆111 · Updated last week
- FTPipe and related pipeline model parallelism research. ☆43 · Updated 2 years ago
- Research and development for optimizing transformers ☆131 · Updated 4 years ago
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models. ☆69 · Updated 9 months ago
- nnScaler: Compiling DNN models for Parallel Training ☆121 · Updated 3 months ago
- Automated Parallelization System and Infrastructure for Multiple Ecosystems ☆82 · Updated last year
- AI and Memory Wall ☆225 · Updated last year
- A Python library that transfers PyTorch tensors between CPU and NVMe ☆123 · Updated last year
- A home for the final text of all TVM RFCs. ☆108 · Updated last year
- Microsoft Collective Communication Library ☆66 · Updated last year
- ☆115 · Updated last year
- Extensible collectives library in Triton ☆91 · Updated 9 months ago
- A baseline repository of Auto-Parallelism in Training Neural Networks ☆147 · Updated 3 years ago
- Boost hardware utilization for ML training workloads via Inter-model Horizontal Fusion ☆32 · Updated last year
- ☆82 · Updated 7 months ago
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆230 · Updated 2 years ago