spcl / substation
Research and development for optimizing transformers
☆126 · Updated 4 years ago
Alternatives and similar repositories for substation
Users interested in substation are comparing it to the libraries listed below.
- FTPipe and related pipeline model parallelism research. ☆41 · Updated 2 years ago
- ☆73 · Updated 4 years ago
- Training neural networks in TensorFlow 2.0 with 5x less memory ☆131 · Updated 3 years ago
- A schedule language for large model training ☆148 · Updated 11 months ago
- ☆105 · Updated 9 months ago
- Fairring (FAIR + Herring) is a plug-in for PyTorch that provides a process group for distributed training that outperforms NCCL at large … ☆65 · Updated 3 years ago
- ☆79 · Updated 2 weeks ago
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters. ☆38 · Updated 2 years ago
- An Efficient Pipelined Data Parallel Approach for Training Large Model ☆76 · Updated 4 years ago
- ☆86 · Updated 5 months ago
- ☆146 · Updated 10 months ago
- ☆79 · Updated 2 years ago
- SparseTIR: Sparse Tensor Compiler for Deep Learning ☆138 · Updated 2 years ago
- PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models (ICML 2021) ☆56 · Updated 3 years ago
- ☆43 · Updated last year
- System for automated integration of deep learning backends. ☆48 · Updated 2 years ago
- Extensible collectives library in Triton ☆87 · Updated 2 months ago
- A Python library that transfers PyTorch tensors between CPU and NVMe ☆115 · Updated 6 months ago
- nnScaler: Compiling DNN models for Parallel Training ☆113 · Updated last month
- ☆143 · Updated 4 months ago
- ☆250 · Updated 10 months ago
- PET: Optimizing Tensor Programs with Partially Equivalent Transformations and Automated Corrections ☆121 · Updated 2 years ago
- ☆208 · Updated 10 months ago
- ☆167 · Updated 11 months ago
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models. ☆63 · Updated 2 months ago
- Block-sparse primitives for PyTorch ☆155 · Updated 4 years ago
- A library of GPU kernels for sparse matrix operations. ☆264 · Updated 4 years ago
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆208 · Updated 9 months ago
- CUDA templates for tile-sparse matrix multiplication based on CUTLASS. ☆51 · Updated 7 years ago
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind … ☆157 · Updated 5 months ago