zhuohan123 / terapipe
☆77 · Updated May 4, 2021
Alternatives and similar repositories for terapipe
Users interested in terapipe are comparing it to the libraries listed below:
- FTPipe and related pipeline model parallelism research. ☆44 · Updated May 16, 2023
- An experimental parallel training platform ☆56 · Updated Mar 25, 2024
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters. ☆44 · Updated Nov 4, 2022
- Official repository for the paper DynaPipe: Optimizing Multi-task Training through Dynamic Pipelines ☆19 · Updated Dec 8, 2023
- ☆82 · Updated this week
- An Efficient Pipelined Data Parallel Approach for Training Large Models ☆76 · Updated Dec 11, 2020
- Python package for rematerialization-aware gradient checkpointing ☆27 · Updated Oct 31, 2023
- ☆84 · Updated Dec 2, 2022
- A library for syntactically rewriting Python programs, pronounced (sinner). ☆67 · Updated Feb 22, 2022
- Zero Bubble Pipeline Parallelism ☆449 · Updated May 7, 2025
- [MLSys 2021] IOS: Inter-Operator Scheduler for CNN Acceleration ☆199 · Updated Apr 27, 2022
- nnScaler: Compiling DNN models for Parallel Training ☆124 · Updated Sep 23, 2025
- ☆26 · Updated Aug 31, 2023
- An Attention Superoptimizer ☆22 · Updated Jan 20, 2025
- Efficient and easy multi-instance LLM serving ☆527 · Updated Sep 3, 2025
- An external memory allocator example for PyTorch. ☆16 · Updated Aug 10, 2025
- 🔮 Execution time predictions for deep neural network training iterations across different GPUs. ☆64 · Updated Nov 26, 2022
- ☆28 · Updated Jul 11, 2021
- ☆25 · Updated Apr 3, 2023
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆222 · Updated Aug 19, 2024
- Code for "Heterogeneity-Aware Cluster Scheduling Policies for Deep Learning Workloads", which appeared at OSDI 2020 ☆137 · Updated Jul 25, 2024
- Large-scale graph learning on a single machine. ☆167 · Updated Feb 25, 2025
- ☆41 · Updated Oct 12, 2020
- Compiler for Dynamic Neural Networks ☆45 · Updated Nov 13, 2023
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long-Context Transformer Model Training and Inference ☆643 · Updated Jan 15, 2026
- Research and development for optimizing transformers ☆131 · Updated Feb 16, 2021
- A schedule language for large model training ☆152 · Updated Aug 21, 2025
- Tacker: Tensor-CUDA Core Kernel Fusion for Improving the GPU Utilization while Ensuring QoS ☆34 · Updated Feb 10, 2025
- ☆145 · Updated Jan 30, 2025
- Play GEMM with TVM ☆91 · Updated Jul 22, 2023
- Official repository for "QSync: Quantization-Minimized Synchronous Distributed Training Across Hybrid Devices" (IPDPS 2024) ☆20 · Updated Feb 23, 2024
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models. ☆70 · Updated Mar 20, 2025
- ☆13 · Updated Feb 22, 2023
- An easy general acc. ☆18 · Updated Mar 22, 2021
- A Cluster-Wide Model Manager to Accelerate DNN Training via Automated Training Warmup ☆35 · Updated Jan 9, 2023
- A PIM instrumentation, compilation, execution, simulation, and evaluation repository for BLIMP-style architectures. ☆18 · Updated May 12, 2022
- PipeSwitch: Fast Pipelined Context Switching for Deep Learning Applications ☆127 · Updated May 9, 2022
- PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models (ICML 2021) ☆55 · Updated Jul 21, 2021
- SHADE: Enable Fundamental Cacheability for Distributed Deep Learning Training ☆36 · Updated Mar 1, 2023