DataStates / datastates-llm
LLM checkpointing for DeepSpeed/Megatron
☆16 · Updated last week
Alternatives and similar repositories for datastates-llm:
Users interested in datastates-llm are comparing it to the libraries listed below.
- A resilient distributed training framework ☆93 · Updated 11 months ago
- ☆55 · Updated 9 months ago
- ☆24 · Updated last year
- Dynamic resource changes for multi-dimensional parallelism training ☆24 · Updated 4 months ago
- A Cluster-Wide Model Manager to Accelerate DNN Training via Automated Training Warmup ☆34 · Updated 2 years ago
- nnScaler: Compiling DNN models for Parallel Training ☆103 · Updated last month
- SpotServe: Serving Generative Large Language Models on Preemptible Instances ☆112 · Updated last year
- Stateful LLM Serving ☆50 · Updated 3 weeks ago
- [ICML 2024] Serving LLMs on heterogeneous decentralized clusters. ☆23 · Updated 10 months ago
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters. ☆38 · Updated 2 years ago
- ☆81 · Updated 3 years ago
- [IJCAI2023] An automated parallel training system that combines the advantages of both data and model parallelism. If you have any inte… ☆51 · Updated last year
- ☆53 · Updated 4 years ago
- Official Repo for "LLM-PQ: Serving LLM on Heterogeneous Clusters with Phase-Aware Partition and Adaptive Quantization" ☆29 · Updated last year
- ☆16 · Updated 2 years ago
- Official repository for the paper DynaPipe: Optimizing Multi-task Training through Dynamic Pipelines ☆17 · Updated last year
- ☆91 · Updated 4 months ago
- ☆72 · Updated 3 years ago
- FlexFlow Serve: Low-Latency, High-Performance LLM Serving ☆32 · Updated this week
- An interference-aware scheduler for fine-grained GPU sharing ☆129 · Updated 2 months ago
- A ChatGPT (GPT-3.5) & GPT-4 Workload Trace to Optimize LLM Serving Systems ☆156 · Updated 5 months ago
- ☆14 · Updated 2 years ago
- DISB is a new DNN inference serving benchmark with diverse workloads and models, as well as real-world traces. ☆53 · Updated 7 months ago
- ☆45 · Updated 9 months ago
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models. ☆61 · Updated last week
- High-performance Transformer implementation in C++. ☆113 · Updated 2 months ago
- A minimal implementation of vLLM. ☆37 · Updated 8 months ago
- LLM serving cluster simulator ☆95 · Updated 11 months ago
- [NeurIPS 2024] Efficient LLM Scheduling by Learning to Rank ☆43 · Updated 4 months ago
- Code for MLSys 2024 Paper "SiDA-MoE: Sparsity-Inspired Data-Aware Serving for Efficient and Scalable Large Mixture-of-Experts Models" ☆16 · Updated 11 months ago