ByteDance-Seed / StragglerAnalysis
☆43 · Updated 6 months ago
Alternatives and similar repositories for StragglerAnalysis
Users interested in StragglerAnalysis are comparing it to the libraries listed below.
- Efficient Compute-Communication Overlap for Distributed LLM Inference ☆62 · Updated 3 weeks ago
- Stateful LLM Serving ☆88 · Updated 8 months ago
- A framework for generating realistic LLM serving workloads ☆79 · Updated last month
- ☆79 · Updated last month
- DLSlime: Flexible & Efficient Heterogeneous Transfer Toolkit ☆82 · Updated last week
- A resilient distributed training framework ☆96 · Updated last year
- Microsoft Collective Communication Library ☆66 · Updated last year
- ☆67 · Updated 2 months ago
- NEO is an LLM inference engine that relieves GPU memory pressure through CPU offloading ☆69 · Updated 5 months ago
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer ☆144 · Updated 2 months ago
- Dynamic resource changes for multi-dimensional parallelism training ☆29 · Updated 3 months ago
- ☆57 · Updated 3 weeks ago
- Official repository for the paper DynaPipe: Optimizing Multi-task Training through Dynamic Pipelines ☆20 · Updated last year
- gLLM: Global Balanced Pipeline Parallelism System for Distributed LLM Serving with Token Throttling ☆49 · Updated this week
- Aims to implement dual-port and multi-QP solutions in the DeepEP IBRC transport ☆67 · Updated 6 months ago
- DISB is a new DNN inference serving benchmark with diverse workloads and models, as well as real-world traces. ☆56 · Updated last year
- ☆71 · Updated 10 months ago
- ☆21 · Updated last year
- [OSDI'24] Serving LLM-based Applications Efficiently with Semantic Variable ☆196 · Updated last year
- A lightweight design for computation-communication overlap. ☆187 · Updated last month
- SpotServe: Serving Generative Large Language Models on Preemptible Instances ☆132 · Updated last year
- Artifact of OSDI '24 paper, "Llumnix: Dynamic Scheduling for Large Language Model Serving" ☆63 · Updated last year
- [NeurIPS 2024] Efficient LLM Scheduling by Learning to Rank ☆64 · Updated last year
- ☆47 · Updated 11 months ago
- Medusa: Accelerating Serverless LLM Inference with Materialization [ASPLOS'25] ☆39 · Updated 6 months ago
- Artifact for "Apparate: Rethinking Early Exits to Tame Latency-Throughput Tensions in ML Serving" [SOSP '24] ☆25 · Updated last year
- ☆40 · Updated last year
- Research prototype of PRISM, a cost-efficient multi-LLM serving system with flexible time- and space-based GPU sharing. ☆46 · Updated 3 months ago
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving (OSDI 23) ☆91 · Updated 2 years ago
- nnScaler: Compiling DNN models for Parallel Training ☆119 · Updated 2 months ago