zhuohan123 / terapipe
☆77 · Updated 4 years ago
Alternatives and similar repositories for terapipe
Users who are interested in terapipe are comparing it to the libraries listed below.
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models. ☆67 · Updated 7 months ago
- nnScaler: Compiling DNN models for Parallel Training ☆118 · Updated last month
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters. ☆43 · Updated 3 years ago
- ☆83 · Updated 2 years ago
- FTPipe and related pipeline model parallelism research. ☆43 · Updated 2 years ago
- ☆80 · Updated 5 months ago
- Sequence-level 1F1B schedule for LLMs (see the schedule sketch after this list). ☆32 · Updated 2 months ago
- An experimental parallel training platform ☆56 · Updated last year
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆222 · Updated 2 years ago
- An Efficient Pipelined Data Parallel Approach for Training Large Models ☆76 · Updated 4 years ago
- Automated Parallelization System and Infrastructure for Multiple Ecosystems ☆80 · Updated 11 months ago
- ☆87 · Updated 3 years ago
- A schedule language for large model training ☆151 · Updated 2 months ago
- Compiler for Dynamic Neural Networks ☆46 · Updated last year
- Official repository for the paper DynaPipe: Optimizing Multi-task Training through Dynamic Pipelines ☆20 · Updated last year
- Supplemental materials for the ASPLOS 2025 / EuroSys 2025 Contest on Intra-Operator Parallelism for Distributed Deep Learning ☆23 · Updated 5 months ago
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving (OSDI 23) ☆89 · Updated 2 years ago
- ☆124 · Updated last year
- ☆158 · Updated last year
- A baseline repository of Auto-Parallelism in Training Neural Networks ☆147 · Updated 3 years ago
- ☆146 · Updated 10 months ago
- A lightweight design for computation-communication overlap. ☆183 · Updated last month
- [ICML 2024] Serving LLMs on heterogeneous decentralized clusters. ☆31 · Updated last year
- ☆75 · Updated 3 weeks ago
- Distributed MoE in a Single Kernel [NeurIPS '25] ☆89 · Updated last month
- A resilient distributed training framework ☆96 · Updated last year
- GitHub mirror of the triton-lang/triton repo. ☆98 · Updated this week
- [IJCAI 2023] An automated parallel training system that combines the advantages of both data and model parallelism. If you have any inte… ☆52 · Updated 2 years ago
- ☆43 · Updated 3 years ago
- PET: Optimizing Tensor Programs with Partially Equivalent Transformations and Automated Corrections ☆121 · Updated 3 years ago
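
Several of the repositories above (terapipe itself, Chimera, FTPipe, and the sequence-level 1F1B repo) revolve around pipeline-parallel training schedules. For orientation, below is a minimal, illustrative sketch of the classic 1F1B (one-forward-one-backward) schedule that these projects build on or modify. It is not taken from any listed project; the function name `one_f1b_schedule` and its parameters are hypothetical, chosen only to make the schedule's structure concrete.

```python
# Minimal sketch of the 1F1B pipeline schedule (illustrative only, not from
# any repository above). After a warm-up of forward-only microbatches, each
# stage alternates one forward with one backward, then drains the remaining
# backwards. This bounds in-flight activations per stage, unlike GPipe's
# all-forwards-then-all-backwards schedule.

def one_f1b_schedule(stage_id: int, num_stages: int, num_microbatches: int):
    """Yield ('F'|'B', microbatch_index) operations for one pipeline stage."""
    # Earlier stages warm up with more forwards so the pipeline fills.
    warmup = min(num_stages - stage_id - 1, num_microbatches)
    fwd = bwd = 0
    for _ in range(warmup):                      # warm-up: forwards only
        yield ("F", fwd); fwd += 1
    for _ in range(num_microbatches - warmup):   # steady state: 1F, then 1B
        yield ("F", fwd); fwd += 1
        yield ("B", bwd); bwd += 1
    while bwd < num_microbatches:                # cool-down: drain backwards
        yield ("B", bwd); bwd += 1

if __name__ == "__main__":
    # Print the per-stage operation order for a 4-stage pipeline, 8 microbatches.
    for stage in range(4):
        ops = " ".join(f"{op}{i}" for op, i in one_f1b_schedule(stage, 4, 8))
        print(f"stage {stage}: {ops}")
```

Running the sketch shows the last stage strictly alternating F0 B0 F1 B1 …, while stage 0 runs three warm-up forwards before settling into the alternating pattern; the sequence-level 1F1B work listed above applies the same idea at the granularity of token spans rather than whole microbatches.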