A framework for generating realistic LLM serving workloads
☆103 · Updated Oct 9, 2025
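To make the tagline concrete, here is a minimal sketch of what generating a synthetic LLM serving workload can look like: Poisson request arrivals plus heavy-tailed input/output token lengths. This is not ServeGen's actual API; every name, distribution, and parameter below is an illustrative assumption.

```python
# Minimal workload-generator sketch (NOT ServeGen's API; all names and
# distribution parameters here are illustrative assumptions).
import random
from dataclasses import dataclass

@dataclass
class Request:
    arrival_s: float    # arrival time in seconds since trace start
    input_tokens: int   # prompt length
    output_tokens: int  # generation length

def generate_trace(rate_rps: float, duration_s: float, seed: int = 0) -> list[Request]:
    rng = random.Random(seed)
    trace, t = [], 0.0
    while True:
        # Poisson process: exponentially distributed inter-arrival gaps.
        t += rng.expovariate(rate_rps)
        if t > duration_s:
            break
        # Lognormal lengths approximate the long tail seen in real serving
        # traces; the (mu, sigma) values are made up for illustration.
        input_tokens = max(1, int(rng.lognormvariate(5.5, 1.0)))
        output_tokens = max(1, int(rng.lognormvariate(4.5, 0.8)))
        trace.append(Request(t, input_tokens, output_tokens))
    return trace

if __name__ == "__main__":
    trace = generate_trace(rate_rps=2.0, duration_s=60.0)
    print(f"{len(trace)} requests; first: {trace[0]}")
```

A realistic generator would fit these distributions to production traces rather than hard-coding them; the point here is only the shape of the output, namely timestamped requests with input and output lengths.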
Alternatives and similar repositories for ServeGen
Users interested in ServeGen are comparing it to the repositories listed below.
- Asynchronous pipeline parallel optimization ☆19 · Updated Feb 2, 2026
- Research prototype of PRISM, a cost-efficient multi-LLM serving system with flexible time- and space-based GPU sharing ☆58 · Updated Aug 15, 2025
- Deferred Continuous Batching in Resource-Efficient Large Language Model Serving (EuroMLSys 2024) ☆19 · Updated May 28, 2024
- TraceWeaver is a research prototype for transparently tracing requests through a microservice without application instrumentation ☆23 · Updated Sep 2, 2024
- A parallelized VAE that avoids OOM in high-resolution image generation ☆85 · Updated Aug 4, 2025
- Easy, Fast, and Scalable Multimodal AI ☆113 · Updated Feb 8, 2026
- A ChatGPT (GPT-3.5) & GPT-4 workload trace to optimize LLM serving systems ☆241 · Updated Feb 1, 2026
- A from-scratch C implementation of the multi-head latent attention used in the DeepSeek-V3 technical paper ☆18 · Updated Jan 15, 2025
- An auxiliary project analyzing the characteristics of KV caches in DiT attention ☆33 · Updated Nov 29, 2024
- SpotServe: Serving Generative Large Language Models on Preemptible Instances ☆135 · Updated Feb 22, 2024
- Official repository for the paper "DynaPipe: Optimizing Multi-task Training through Dynamic Pipelines" ☆19 · Updated Dec 8, 2023
- AI model training on heterogeneous, geo-distributed resources ☆37 · Updated Nov 24, 2025
- PaperHelper: Knowledge-Based LLM QA Paper Reading Assistant with Reliable References ☆20 · Updated Jun 13, 2024
- Source code for the OSDI 2023 paper "Cilantro: Performance-Aware Resource Allocation for General Objectives via Online Feedback" ☆40 · Updated Jul 6, 2023
- Predicts the performance of LLM inference services ☆21 · Updated Sep 18, 2025
- A caching framework for microservice applications ☆24 · Updated Apr 22, 2024
- Releasing the spot availability traces used in the "Can't Be Late" paper ☆25 · Updated Mar 31, 2024
- Efficient Long-context Language Model Training by Core Attention Disaggregation ☆91 · Updated Feb 23, 2026
- Code for the paper "FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference" [ICLR 2025 Oral] ☆161 · Updated Oct 13, 2025
- A low-latency & high-throughput serving engine for LLMs ☆480 · Updated Jan 8, 2026
- Adaptive Message Quantization and Parallelization for Distributed Full-graph GNN Training ☆24 · Updated Mar 1, 2024
- Prototype of MegaScale-Infer: Serving Mixture-of-Experts at Scale with Disaggregated Expert Parallelism ☆27 · Updated Apr 4, 2025
- [OSDI 2024] Motor: Enabling Multi-Versioning for Distributed Transactions on Disaggregated Memory ☆50 · Updated Mar 3, 2024
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer ☆163 · Updated Feb 11, 2026
- Query-Adaptive Vector Search ☆68 · Updated Feb 13, 2026
- Artifact for "Shockwave: Fair and Efficient Cluster Scheduling for Dynamic Adaptation in Machine Learning" [NSDI '23] ☆47 · Updated Nov 24, 2022
- Sample code using NVSHMEM on multi-GPU systems ☆30 · Updated Jan 22, 2023
- K8sSim: A Kubernetes cluster simulator ☆22 · Updated Feb 9, 2023
- Mixed-precision training from scratch with Tensors and CUDA ☆28 · Updated May 14, 2024
- Surrogate-based Hyperparameter Tuning System ☆28 · Updated Jun 29, 2023
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving (OSDI '23) ☆94 · Updated Jul 14, 2023
- A benchmark suite for evaluating FaaS schedulers ☆23 · Updated Nov 5, 2022
- Artifact for "Apparate: Rethinking Early Exits to Tame Latency-Throughput Tensions in ML Serving" [SOSP '24] ☆24 · Updated Nov 21, 2024