A framework for generating realistic LLM serving workloads
☆116 · Oct 9, 2025 · Updated 6 months ago
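To make the page's topic concrete: a trace-driven generator like ServeGen replays measured arrival patterns and request-length distributions. The sketch below is a hypothetical stand-in, not ServeGen's API — it assumes Poisson arrivals and log-normal token lengths purely for illustration.

```python
import random

def generate_workload(rate_per_s: float, duration_s: float, seed: int = 0):
    """Generate (arrival_time, prompt_len, output_len) tuples for a
    synthetic LLM serving workload.

    Illustrative assumptions (not from ServeGen): Poisson arrivals,
    log-normal prompt/output lengths.
    """
    rng = random.Random(seed)
    t, requests = 0.0, []
    while True:
        # Poisson process: exponential inter-arrival gaps.
        t += rng.expovariate(rate_per_s)
        if t >= duration_s:
            break
        prompt_len = max(1, int(rng.lognormvariate(5.0, 1.0)))  # input tokens
        output_len = max(1, int(rng.lognormvariate(4.0, 1.0)))  # output tokens
        requests.append((t, prompt_len, output_len))
    return requests

reqs = generate_workload(rate_per_s=2.0, duration_s=60.0)
```

A real benchmark harness would feed these tuples to a serving engine and measure latency; the fidelity of the length and arrival distributions is exactly what trace-based frameworks aim to improve over such synthetic models.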
Alternatives and similar repositories for ServeGen
Users interested in ServeGen are comparing it to the repositories listed below.
- ☆22 · Sep 10, 2025 · Updated 7 months ago
- Research prototype of PRISM — a cost-efficient multi-LLM serving system with flexible time- and space-based GPU sharing. ☆61 · Mar 17, 2026 · Updated last month
- Asynchronous pipeline parallel optimization ☆21 · Feb 2, 2026 · Updated 3 months ago
- ☆178 · Mar 12, 2024 · Updated 2 years ago
- TraceWeaver is a research prototype for transparently tracing requests through a microservice without application instrumentation. ☆23 · Sep 2, 2024 · Updated last year
- Adaptive Message Quantization and Parallelization for Distributed Full-graph GNN Training ☆24 · Mar 1, 2024 · Updated 2 years ago
- A ChatGPT (GPT-3.5) & GPT-4 workload trace to optimize LLM serving systems ☆256 · Mar 19, 2026 · Updated last month
- A parallelized VAE that avoids OOM for high-resolution image generation ☆91 · Apr 21, 2026 · Updated last week
- Prototype of MegaScale-Infer: Serving Mixture-of-Experts at Scale with Disaggregated Expert Parallelism ☆29 · Apr 4, 2025 · Updated last year
- Easy, fast, and scalable multimodal AI ☆124 · Apr 17, 2026 · Updated 2 weeks ago
- High-performance KV cache storage for LLM inference — GPU offloading, SSD caching, and cross-node sharing via RDMA. Works with vLLM and S… ☆46 · Updated this week
- Sample code using NVSHMEM on multi-GPU systems ☆30 · Jan 22, 2023 · Updated 3 years ago
- ☆66 · Jun 25, 2024 · Updated last year
- Website for CSE 234, Winter 2025 ☆14 · Mar 24, 2025 · Updated last year
- SpotServe: Serving Generative Large Language Models on Preemptible Instances ☆134 · Feb 22, 2024 · Updated 2 years ago
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer ☆180 · Feb 11, 2026 · Updated 2 months ago
- ACM SoCC 2019, "Coupling Decentralized Key-Value Stores with Erasure Coding" ☆15 · May 22, 2021 · Updated 4 years ago
- Releases the spot availability traces used in the "Can't Be Late" paper. ☆26 · Mar 31, 2024 · Updated 2 years ago
- An auxiliary project analyzing the characteristics of the KV cache in DiT attention. ☆34 · Nov 29, 2024 · Updated last year
- ☆21 · Jun 9, 2025 · Updated 10 months ago
- A low-latency & high-throughput serving engine for LLMs ☆496 · Jan 8, 2026 · Updated 3 months ago
- Source code for the OSDI 2023 paper "Cilantro: Performance-Aware Resource Allocation for General Objectives via Online Feedback" ☆40 · Jul 6, 2023 · Updated 2 years ago
- Efficient long-context language model training by core attention disaggregation ☆98 · Apr 7, 2026 · Updated 3 weeks ago
- ☆28 · Feb 15, 2025 · Updated last year
- UniVid: The Open-Source Unified Video Model ☆32 · Oct 13, 2025 · Updated 6 months ago
- SCIONLab user interface and administration ☆10 · Mar 22, 2026 · Updated last month
- Vibe-coding a GPGPU via Cursor + Gemini3 Pro ☆82 · Nov 23, 2025 · Updated 5 months ago
- Official repository for the paper "DynaPipe: Optimizing Multi-task Training through Dynamic Pipelines" ☆19 · Dec 8, 2023 · Updated 2 years ago
- ☆18 · Apr 22, 2025 · Updated last year
- ☆88 · Oct 17, 2025 · Updated 6 months ago
- Predicts the performance of LLM inference services ☆23 · Sep 18, 2025 · Updated 7 months ago
- The ASPLOS 2025 / EuroSys 2025 Contest Track ☆40 · Apr 4, 2026 · Updated 3 weeks ago
- From-scratch C implementation of the multi-head latent attention used in the DeepSeek-V3 technical paper. ☆18 · Jan 15, 2025 · Updated last year
- Implements several LLM KV-cache sparsity methods ☆40 · Jun 6, 2024 · Updated last year
- Uses a Monte Carlo method to find minimal cut sets (MCS) for a fault tree. ☆15 · Apr 3, 2017 · Updated 9 years ago
- ☆27 · Aug 31, 2023 · Updated 2 years ago
- ☆13 · Oct 13, 2021 · Updated 4 years ago
- A caching framework for microservice applications ☆24 · Apr 22, 2024 · Updated 2 years ago
- Disaggregated serving system for Large Language Models (LLMs). ☆804 · Apr 6, 2025 · Updated last year
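One entry above applies Monte Carlo sampling to find minimal cut sets of a fault tree. As a rough illustration of that idea (a hypothetical toy tree, not the listed repository's code): sample random component-failure sets, keep those that trigger the top event, and greedily shrink each to a minimal subset.

```python
import random

# Hypothetical fault tree: top event fires if (A AND B) OR C.
def top_event(failed: set) -> bool:
    return {"A", "B"} <= failed or "C" in failed

def monte_carlo_mcs(components, trials=2000, seed=1):
    """Estimate minimal cut sets by random sampling: draw random failure
    sets, keep those triggering the top event, and shrink each one."""
    rng = random.Random(seed)
    cut_sets = set()
    for _ in range(trials):
        failed = {c for c in components if rng.random() < 0.5}
        if not top_event(failed):
            continue
        # Greedy minimization: drop components that are not needed.
        for c in sorted(failed):
            if top_event(failed - {c}):
                failed = failed - {c}
        cut_sets.add(frozenset(failed))
    return cut_sets

mcs = monte_carlo_mcs(["A", "B", "C"])
```

For this toy tree the sampling converges on the two minimal cut sets {C} and {A, B}; real tools use the same reduce-and-deduplicate loop over far larger trees.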