Youhe-Jiang / IJCAI2023-OptimalShardedDataParallel
[IJCAI2023] An automated parallel training system that combines the advantages of both data and model parallelism. If you are interested, please visit/star/fork https://github.com/Youhe-Jiang/OptimalShardedDataParallel
☆52 · Updated 2 years ago
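For orientation, here is a minimal sketch of the sharded data parallel idea that OSDP automates trade-offs for, written against PyTorch's built-in `FullyShardedDataParallel` wrapper rather than OSDP's own interface. It is illustrative only and assumes a multi-GPU host with NCCL, launched via `torchrun`; the model and hyperparameters are arbitrary placeholders.

```python
# Minimal sketch of sharded data parallelism (not OSDP's API).
# Assumed launch: torchrun --nproc_per_node=N demo.py
import os
import torch
import torch.nn as nn
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

def main():
    dist.init_process_group("nccl")           # torchrun supplies rank/world size
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 1024))
    model = FSDP(model.cuda())                # shard params/grads/optimizer state across ranks

    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
    x = torch.randn(8, 1024, device="cuda")   # per-rank micro-batch
    loss = model(x).square().mean()           # dummy objective for illustration
    loss.backward()                           # gradients are reduce-scattered across ranks
    opt.step()
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Sharding parameters, gradients, and optimizer state this way trades extra communication (all-gather on use, reduce-scatter on backward) for per-GPU memory; OSDP's contribution is searching that trade-off space automatically.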
Alternatives and similar repositories for IJCAI2023-OptimalShardedDataParallel
Users who are interested in IJCAI2023-OptimalShardedDataParallel also compare it to the libraries listed below
- ☆77 · Updated 4 years ago
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters. ☆43 · Updated 3 years ago
- ☆82 · Updated 7 months ago
- An Efficient Pipelined Data Parallel Approach for Training Large Model ☆76 · Updated 5 years ago
- ☆83 · Updated 3 years ago
- A resilient distributed training framework ☆96 · Updated last year
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models. ☆69 · Updated 9 months ago
- [ICML 2024] Serving LLMs on heterogeneous decentralized clusters. ☆33 · Updated last year
- nnScaler: Compiling DNN models for Parallel Training ☆121 · Updated 3 months ago
- Dynamic Tensor Rematerialization prototype (modified PyTorch) and simulator. Paper: https://arxiv.org/abs/2006.09616 ☆133 · Updated 2 years ago
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving (OSDI 23) ☆92 · Updated 2 years ago
- A high-performance distributed deep learning system targeting large-scale and automated distributed training. If you are interested, … ☆122 · Updated 2 years ago
- ☆43 · Updated 3 years ago
- An experimental parallel training platform ☆56 · Updated last year
- FTPipe and related pipeline model parallelism research. ☆43 · Updated 2 years ago
- Artifact of OSDI '24 paper, "Llumnix: Dynamic Scheduling for Large Language Model Serving" ☆64 · Updated last year
- Automated Parallelization System and Infrastructure for Multiple Ecosystems ☆82 · Updated last year
- ☆163 · Updated last year
- ☆88 · Updated 3 years ago
- Official repository for the paper DynaPipe: Optimizing Multi-task Training through Dynamic Pipelines ☆20 · Updated 2 years ago
- Boost hardware utilization for ML training workloads via Inter-model Horizontal Fusion ☆32 · Updated last year
- ☆38 · Updated 4 years ago
- PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models. ICML 2021 ☆55 · Updated 4 years ago
- A baseline repository of Auto-Parallelism in Training Neural Networks ☆147 · Updated 3 years ago
- PyTorch implementation of the paper "Response Length Perception and Sequence Scheduling: An LLM-Empowered LLM Inference Pipeline" ☆93 · Updated 2 years ago
- PET: Optimizing Tensor Programs with Partially Equivalent Transformations and Automated Corrections ☆122 · Updated 3 years ago
- ☆17 · Updated 3 years ago
- ☆25 · Updated 2 years ago
- Compiler for Dynamic Neural Networks ☆46 · Updated 2 years ago
- Sequence-level 1F1B schedule for LLMs. ☆38 · Updated 3 months ago