spcl / substation
Research and development for optimizing transformers
Related projects
Alternatives and complementary repositories for substation
- FTPipe and related pipeline model parallelism research.
- Training neural networks in TensorFlow 2.0 with 5x less memory.
- A schedule language for large model training.
- A Python library that transfers PyTorch tensors between CPU and NVMe.
- A library of GPU kernels for sparse matrix operations.
- SparseTIR: Sparse Tensor Compiler for Deep Learning.
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters.
- An extensible collectives library in Triton.
- Benchmark code for the "Online normalizer calculation for softmax" paper (a minimal sketch of the algorithm follows this list).
- nnScaler: Compiling DNN models for parallel training.
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind…
- Fast sparse deep learning on CPUs.
- Block-sparse primitives for PyTorch.
- Memory Optimizations for Deep Learning (ICML 2023).
- Fairring (FAIR + Herring) is a plug-in for PyTorch that provides a process group for distributed training that outperforms NCCL at large …
- Applied AI experiments and examples for PyTorch.
- PET: Optimizing Tensor Programs with Partially Equivalent Transformations and Automated Corrections.
- A system for automated integration of deep learning backends.
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity.
- An efficient pipelined data-parallel approach for training large models.
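
The "Online normalizer calculation for softmax" entry above refers to the single-pass softmax normalizer of Milakov and Gimelshein. The sketch below is a rough NumPy illustration of that idea, not the repository's benchmark code: it keeps a running maximum and a running, rescaled sum of exponentials so the normalizer is computed in one pass over the input. The function name and the reference check are ours.

```python
import numpy as np

def online_softmax(x):
    """One-pass softmax: track the running max m and the running sum d of
    exp(x_i - m), rescaling d whenever the max grows (didactic sketch)."""
    m = -np.inf  # running maximum
    d = 0.0      # running sum of exponentials, relative to the current max
    for xi in x:
        m_new = max(m, xi)
        d = d * np.exp(m - m_new) + np.exp(xi - m_new)
        m = m_new
    return np.exp(np.asarray(x) - m) / d

# The result matches the usual two-pass formulation.
x = np.random.randn(16)
reference = np.exp(x - x.max()) / np.exp(x - x.max()).sum()
assert np.allclose(online_softmax(x), reference)
```

The standard formulation needs one pass to find the maximum and a second to sum the exponentials; folding both into a single pass is what makes the online variant attractive for fused softmax and attention kernels on GPUs.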