awslabs / slapo
A schedule language for large model training
☆146 · Updated 10 months ago
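Slapo's core idea is to decouple optimization from the model definition: you create a schedule over an unmodified PyTorch model, apply primitives (e.g. sharding or checkpointing) by module path, and then build the optimized model. Below is a minimal sketch of that workflow; the `create_schedule`/`build` entry points and the `.shard` primitive follow the project's paper and README as I recall them, but the exact signatures, module paths, and arguments are assumptions, not verified against the current release.

```python
# Minimal sketch of slapo's schedule-based workflow. API names are
# assumptions based on the slapo paper/README and may differ from the
# current release.
import torch.nn as nn
import slapo

# An ordinary PyTorch model; slapo leaves its definition untouched.
model = nn.Sequential(
    nn.Linear(1024, 4096),
    nn.GELU(),
    nn.Linear(4096, 1024),
)

# Create a schedule over the model.
sch = slapo.create_schedule(model)

# Apply optimization primitives by module path, e.g. shard a linear
# layer's weight across tensor-parallel ranks (illustrative path/args).
sch["0"].shard("weight", axis=0)

# Materialize the optimized model for training; the original model
# definition remains unchanged.
opt_model, _ = slapo.build(sch)
```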
Alternatives and similar repositories for slapo:
Users interested in slapo are comparing it to the libraries listed below.
- SparseTIR: Sparse Tensor Compiler for Deep Learning ☆136 · Updated 2 years ago
- PET: Optimizing Tensor Programs with Partially Equivalent Transformations and Automated Corrections ☆121 · Updated 2 years ago
- System for automated integration of deep learning backends. ☆48 · Updated 2 years ago
- An extension of TVMScript for writing simple, high-performance GPU kernels with Tensor Cores ☆50 · Updated 9 months ago
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters ☆38 · Updated 2 years ago
- Memory Optimizations for Deep Learning (ICML 2023) ☆64 · Updated last year
- Training neural networks in TensorFlow 2.0 with 5x less memory ☆131 · Updated 3 years ago
- Research and development for optimizing transformers ☆126 · Updated 4 years ago
- An extensible collectives library in Triton ☆86 · Updated last month
- DietCode Code Release ☆64 · Updated 2 years ago
- FTPipe and related pipeline model parallelism research. ☆41 · Updated last year
- Automated Parallelization System and Infrastructure for Multiple Ecosystems ☆78 · Updated 5 months ago
- MLIR-based partitioning system ☆82 · Updated this week
- [MLSys 2021] IOS: Inter-Operator Scheduler for CNN Acceleration ☆198 · Updated 3 years ago
- A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate. ☆102 · Updated this week
- nnScaler: Compiling DNN models for Parallel Training ☆110 · Updated last week
- A home for the final text of all TVM RFCs. ☆102 · Updated 7 months ago
- A Vectorized N:M Format for Unleashing the Power of Sparse Tensor Cores ☆51 · Updated last year
- An Efficient Pipelined Data Parallel Approach for Training Large Models ☆76 · Updated 4 years ago
- A baseline repository of Auto-Parallelism in Training Neural Networks ☆144 · Updated 2 years ago