DataStates / datastates-llm
LLM checkpointing for DeepSpeed/Megatron
☆16 · Updated last month
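For context, below is a minimal sketch of the standard DeepSpeed checkpoint save/load path that checkpointing engines such as datastates-llm accelerate. The model, config values, directory names, and tags are illustrative placeholders, not part of datastates-llm's API; enabling datastates-llm itself is done through the DeepSpeed config as documented in its repo and is not reproduced here.

```python
# Sketch of the DeepSpeed checkpointing pattern (assumed setup, toy model).
import torch
import deepspeed

model = torch.nn.Linear(1024, 1024)  # stand-in for a real LLM
ds_config = {
    "train_batch_size": 8,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    # An async checkpoint engine (e.g., datastates-llm) would be enabled
    # via additional config keys; see the datastates-llm repo for details.
}

engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)

# Periodic checkpointing: save_checkpoint() captures model, optimizer, and
# scheduler state; an asynchronous engine overlaps the host/storage copies
# with subsequent training iterations instead of blocking on them.
engine.save_checkpoint("checkpoints", tag="step_1000",
                       client_state={"step": 1000})

# Resuming: load_checkpoint() restores the saved state and returns any
# client_state that was stored alongside it.
_, client_state = engine.load_checkpoint("checkpoints", tag="step_1000")
```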
Alternatives and similar repositories for datastates-llm
Users interested in datastates-llm are comparing it to the libraries listed below.
- A resilient distributed training framework ☆95 · Updated last year
- Dynamic resource changes for multi-dimensional parallelism training ☆25 · Updated 6 months ago
- ☆60 · Updated 11 months ago
- ☆16 · Updated 2 years ago
- ☆24 · Updated last year
- A Cluster-Wide Model Manager to Accelerate DNN Training via Automated Training Warmup ☆34 · Updated 2 years ago
- Stateful LLM Serving ☆67 · Updated 2 months ago
- ☆14 · Updated 3 years ago
- A lightweight design for computation-communication overlap. ☆113 · Updated last week
- Official repo for "LLM-PQ: Serving LLM on Heterogeneous Clusters with Phase-Aware Partition and Adaptive Quantization" ☆31 · Updated last year
- ☆53 · Updated 4 years ago
- Official repository for the paper "DynaPipe: Optimizing Multi-task Training through Dynamic Pipelines" ☆17 · Updated last year
- A Suite for Parallel Inference of Diffusion Transformers (DiTs) on multi-GPU Clusters ☆45 · Updated 9 months ago
- [IJCAI 2023] An automated parallel training system that combines the advantages of both data and model parallelism. If you have any inte… ☆51 · Updated last year
- An Attention Superoptimizer ☆21 · Updated 3 months ago
- PipeInfer: Accelerating LLM Inference using Asynchronous Pipelined Speculation ☆28 · Updated 6 months ago
- DISB is a new DNN inference serving benchmark with diverse workloads and models, as well as real-world traces. ☆52 · Updated 8 months ago
- NEO is an LLM inference engine that alleviates the GPU memory crisis via CPU offloading ☆27 · Updated 2 months ago
- A minimal implementation of vllm. ☆40 · Updated 9 months ago
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters. ☆38 · Updated 2 years ago
- ☆45 · Updated 10 months ago
- FlexFlow Serve: Low-Latency, High-Performance LLM Serving ☆39 · Updated last week
- ☆9 · Updated last year
- High-performance Transformer implementation in C++. ☆122 · Updated 3 months ago
- SpotServe: Serving Generative Large Language Models on Preemptible Instances ☆118 · Updated last year
- Code for the MLSys 2024 paper "SiDA-MoE: Sparsity-Inspired Data-Aware Serving for Efficient and Scalable Large Mixture-of-Experts Models" ☆17 · Updated last year
- PipeRAG: Fast Retrieval-Augmented Generation via Algorithm-System Co-design (KDD 2025) ☆20 · Updated 11 months ago
- [ICML 2024] Serving LLMs on heterogeneous decentralized clusters. ☆25 · Updated last year
- LLM serving cluster simulator ☆99 · Updated last year
- 16-fold memory access reduction with nearly no loss ☆93 · Updated last month