UofT-EcoSystem / hfta
Boost hardware utilization for ML training workloads via Inter-model Horizontal Fusion
☆32 · Updated last year
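For context, inter-model horizontal fusion runs many small, identically shaped training jobs (e.g., the trials of a hyperparameter sweep) as one batched computation, so a single accelerator stays saturated instead of launching many tiny kernels. The following is a minimal PyTorch sketch of that idea only, not HFTA's actual API; the `FusedLinear` module, shapes, and initialization are illustrative assumptions:

```python
import torch
import torch.nn as nn

class FusedLinear(nn.Module):
    """Runs num_models independent Linear layers as one batched matmul.

    Illustrative sketch of inter-model horizontal fusion: instead of
    num_models separate (in_features -> out_features) layers issuing
    num_models small GEMMs, a single baddbmm over a leading "model"
    dimension does the same work in one kernel launch.
    """
    def __init__(self, num_models: int, in_features: int, out_features: int):
        super().__init__()
        # One weight/bias slice per fused model replica.
        self.weight = nn.Parameter(
            torch.randn(num_models, in_features, out_features) * 0.02)
        self.bias = nn.Parameter(torch.zeros(num_models, 1, out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [num_models, batch, in_features]
        # -> [num_models, batch, out_features]; bias broadcasts over batch.
        return torch.baddbmm(self.bias, x, self.weight)

# Usage: train 8 replicas (e.g., a learning-rate sweep) in one pass.
fused = FusedLinear(num_models=8, in_features=32, out_features=16)
x = torch.randn(8, 64, 32)   # one input batch per replica
y = fused(x)                 # [8, 64, 16], a single batched GEMM
```

In HFTA itself this transformation is applied across the operators of a whole model (convolutions, normalization, optimizers, and so on), but the batched-matmul example above captures the core fusion pattern.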
Alternatives and similar repositories for hfta
Users interested in hfta are comparing it to the libraries listed below.
- ☆93 · Updated 2 years ago
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving (OSDI 23) ☆83 · Updated 2 years ago
- PipeSwitch: Fast Pipelined Context Switching for Deep Learning Applications ☆127 · Updated 3 years ago
- ☆80 · Updated 2 years ago
- DISB is a new DNN inference serving benchmark with diverse workloads and models, as well as real-world traces. ☆53 · Updated 11 months ago
- FTPipe and related pipeline model parallelism research. ☆41 · Updated 2 years ago
- Compiler for Dynamic Neural Networks ☆46 · Updated last year
- ☆14 · Updated 3 years ago
- ☆37 · Updated last month
- An experimental parallel training platform ☆54 · Updated last year
- ☆43 · Updated last year
- 🔮 Execution time predictions for deep neural network training iterations across different GPUs. ☆63 · Updated 2 years ago
- An Efficient Pipelined Data Parallel Approach for Training Large Models ☆77 · Updated 4 years ago
- PET: Optimizing Tensor Programs with Partially Equivalent Transformations and Automated Corrections ☆121 · Updated 3 years ago
- Model-less Inference Serving ☆90 · Updated last year
- DietCode Code Release ☆64 · Updated 3 years ago
- A Cluster-Wide Model Manager to Accelerate DNN Training via Automated Training Warmup ☆35 · Updated 2 years ago
- Cavs: An Efficient Runtime System for Dynamic Neural Networks ☆14 · Updated 4 years ago
- Supplemental materials for the ASPLOS 2025 / EuroSys 2025 Contest on Intra-Operator Parallelism for Distributed Deep Learning ☆23 · Updated 2 months ago
- ☆20 · Updated 3 years ago
- ☆75 · Updated 4 years ago
- ☆47 · Updated 2 years ago
- ☆40 · Updated 4 years ago
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters. ☆40 · Updated 2 years ago
- A schedule language for large model training ☆149 · Updated last year
- Benchmark for matrix multiplications between dense and block-sparse (BSR) matrices in TVM, blocksparse (Gray et al.), and cuSparse. ☆24 · Updated 4 years ago
- ☆25 · Updated last year
- Bamboo is a system for running large pipeline-parallel DNNs affordably, reliably, and efficiently using spot instances. ☆50 · Updated 2 years ago
- ☆22 · Updated 6 years ago
- A GPU-accelerated DNN inference serving system that supports instant kernel preemption and biased concurrent execution in GPU scheduling. ☆42 · Updated 3 years ago