DataStates / datastates-llm
LLM checkpointing for DeepSpeed/Megatron
☆24 · Updated 2 months ago
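datastates-llm plugs into DeepSpeed's checkpointing path, so the sketch below shows the standard DeepSpeed save/load calls that such an asynchronous checkpointing engine accelerates. This is a minimal illustration, not datastates-llm's own API: the placeholder model, config values, paths, and tag are assumptions, and the step that enables datastates-llm itself (its installation and config hook) is deliberately omitted.

```python
# Minimal sketch of the DeepSpeed checkpoint path that a checkpointing
# engine like datastates-llm targets. Placeholder model and config values
# are illustrative only; enabling datastates-llm itself is omitted.
# Run under the DeepSpeed launcher, e.g.: deepspeed sketch.py
import torch
import deepspeed

model = torch.nn.Linear(1024, 1024)  # stand-in for a real LLM

model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config={
        "train_micro_batch_size_per_gpu": 1,
        "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    },
)

# Persist model + optimizer state under a tag. An asynchronous
# checkpointing layer overlaps the actual write to storage with
# subsequent training steps, so this call returns quickly.
model_engine.save_checkpoint("checkpoints/", tag="step_1000")

# On restart, resume from the tagged checkpoint.
model_engine.load_checkpoint("checkpoints/", tag="step_1000")
```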
Alternatives and similar repositories for datastates-llm
Users interested in datastates-llm are comparing it to the libraries listed below.
- A resilient distributed training framework ☆96 · Updated last year
- ☆85 · Updated 3 months ago
- [OSDI'24] Serving LLM-based Applications Efficiently with Semantic Variable ☆209 · Updated last year
- SpotServe: Serving Generative Large Language Models on Preemptible Instances ☆135 · Updated last year
- Stateful LLM Serving ☆95 · Updated 11 months ago
- ☆51 · Updated 9 months ago
- FlexFlow Serve: Low-Latency, High-Performance LLM Serving ☆71 · Updated 4 months ago
- [ICML 2024] Serving LLMs on heterogeneous decentralized clusters. ☆34 · Updated last year
- Microsoft Collective Communication Library ☆66 · Updated last year
- PyTorch library for cost-effective, fast, and easy serving of MoE models. ☆279 · Updated last week
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models. ☆70 · Updated 10 months ago
- ☆89 · Updated 3 years ago
- Dynamic resource changes for multi-dimensional parallelism training ☆30 · Updated 5 months ago
- ☆131 · Updated last year
- A lightweight design for computation-communication overlap. ☆219 · Updated 3 weeks ago
- [ICLR'25] Fast Inference of MoE Models with CPU-GPU Orchestration ☆260 · Updated last year
- ☆47 · Updated last year
- Thunder Research Group's Collective Communication Library ☆47 · Updated 7 months ago
- An NCCL extension library, designed to efficiently offload GPU memory allocated by the NCCL communication library. ☆90 · Updated last month
- An interference-aware scheduler for fine-grained GPU sharing ☆159 · Updated 2 months ago
- PyTorch implementation of the paper "Response Length Perception and Sequence Scheduling: An LLM-Empowered LLM Inference Pipeline". ☆93 · Updated 2 years ago
- A framework for generating realistic LLM serving workloads ☆100 · Updated 4 months ago
- NEO is an LLM inference engine built to ease the GPU memory crunch via CPU offloading ☆84 · Updated 7 months ago
- ☆150 · Updated last year
- ☆164 · Updated 6 months ago
- [NeurIPS 2024] Efficient LLM Scheduling by Learning to Rank ☆69 · Updated last year
- torchcomms: a modern PyTorch communications API ☆327 · Updated this week
- High-performance Transformer implementation in C++. ☆150 · Updated last year
- High-performance distributed data shuffling (all-to-all) library for MoE training and inference ☆112 · Updated last month
- Distributed MoE in a Single Kernel [NeurIPS '25] ☆191 · Updated this week