spcl / substation
Research and development for optimizing transformers
☆125 · Updated 4 years ago
Alternatives and similar repositories for substation:
Users interested in substation are comparing it to the libraries listed below.
- FTPipe and related pipeline model parallelism research. ☆41 · Updated last year
- ☆72 · Updated 3 years ago
- Training neural networks in TensorFlow 2.0 with 5x less memory ☆130 · Updated 3 years ago
- A Python library that transfers PyTorch tensors between CPU and NVMe ☆106 · Updated 3 months ago
- ☆246 · Updated 7 months ago
- A schedule language for large model training ☆143 · Updated 8 months ago
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters. ☆38 · Updated 2 years ago
- ☆100 · Updated 6 months ago
- ☆71 · Updated 3 months ago
- ☆157 · Updated last year
- ☆79 · Updated 3 months ago
- Fairring (FAIR + Herring) is a plug-in for PyTorch that provides a process group for distributed training that outperforms NCCL at large … ☆63 · Updated 2 years ago
- ☆141 · Updated last month
- An Efficient Pipelined Data Parallel Approach for Training Large Models ☆74 · Updated 4 years ago
- Memory Optimizations for Deep Learning (ICML 2023) ☆62 · Updated 11 months ago
- nnScaler: Compiling DNN models for Parallel Training ☆97 · Updated 2 weeks ago
- PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models (ICML 2021) ☆56 · Updated 3 years ago
- Block-sparse primitives for PyTorch ☆153 · Updated 3 years ago
- Simple Distributed Deep Learning on TensorFlow ☆134 · Updated 2 years ago
- ☆47 · Updated 2 months ago
- ☆75 · Updated 2 years ago
- ☆44 · Updated last year
- Fast sparse deep learning on CPUs ☆52 · Updated 2 years ago
- Extensible collectives library in Triton ☆83 · Updated 5 months ago
- A library of GPU kernels for sparse matrix operations. ☆259 · Updated 4 years ago
- ☆185 · Updated 7 months ago
- Python package for rematerialization-aware gradient checkpointing ☆24 · Updated last year
- Official repository for LightSeq: Sequence Level Parallelism for Distributed Training of Long Context Transformers ☆206 · Updated 6 months ago
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind… ☆154 · Updated 2 months ago
- Chimera: Efficiently Training Large-Scale Neural Networks with Bidirectional Pipelines. ☆59 · Updated last year