awslabs / slapo
A schedule language for large model training
☆151 · Updated last month
Alternatives and similar repositories for slapo
Users interested in slapo are comparing it to the libraries listed below.
- ☆145 · Updated 8 months ago
- SparseTIR: Sparse Tensor Compiler for Deep Learning ☆138 · Updated 2 years ago
- ☆42 · Updated 2 years ago
- ☆75 · Updated 4 years ago
- PET: Optimizing Tensor Programs with Partially Equivalent Transformations and Automated Corrections ☆122 · Updated 3 years ago
- ☆151 · Updated last year
- ☆23 · Updated last month
- ☆92 · Updated 2 years ago
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters. ☆40 · Updated 2 years ago
- Training neural networks in TensorFlow 2.0 with 5x less memory ☆135 · Updated 3 years ago
- System for automated integration of deep learning backends. ☆47 · Updated 3 years ago
- An extension of TVMScript for writing simple, high-performance GPU kernels with tensor cores. ☆51 · Updated last year
- DietCode Code Release ☆65 · Updated 3 years ago
- ☆83 · Updated 2 years ago
- GitHub mirror of the triton-lang/triton repo. ☆78 · Updated this week
- Research and development for optimizing transformers ☆130 · Updated 4 years ago
- FTPipe and related pipeline model parallelism research. ☆42 · Updated 2 years ago
- Distributed MoE in a Single Kernel [NeurIPS '25] ☆49 · Updated this week
- nnScaler: Compiling DNN models for Parallel Training ☆118 · Updated last week
- AI and Memory Wall ☆220 · Updated last year
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models. ☆66 · Updated 6 months ago
- A home for the final text of all TVM RFCs. ☆107 · Updated last year
- ☆112 · Updated last year
- ☆121 · Updated 9 months ago
- Extensible collectives library in Triton ☆88 · Updated 6 months ago
- PArametrized Recommendation and AI Model benchmark is a repository for development of numerous uBenchmarks as well as end-to-end nets for… ☆151 · Updated last month
- Automated Parallelization System and Infrastructure for Multiple Ecosystems ☆80 · Updated 10 months ago
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity