microsoft / sarathi-serve
A low-latency & high-throughput serving engine for LLMs
☆470 · Updated last month
Alternatives and similar repositories for sarathi-serve
Users interested in sarathi-serve are comparing it to the repositories listed below.
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆458 · Updated 8 months ago
- Efficient and easy multi-instance LLM serving ☆524 · Updated 5 months ago
- Disaggregated serving system for Large Language Models (LLMs). ☆771 · Updated 10 months ago
- A large-scale simulation framework for LLM inference ☆530 · Updated 6 months ago
- Latency and Memory Analysis of Transformer Models for Training and Inference ☆478 · Updated 9 months ago
- Perplexity GPU Kernels ☆560 · Updated 3 months ago
- ☆131 · Updated last year
- ☆342 · Updated last week
- Since the emergence of ChatGPT in 2022, accelerating Large Language Models has become increasingly important. Here is a list of papers… ☆283 · Updated 11 months ago
- Analyze the inference of Large Language Models (LLMs): aspects like computation, storage, transmission, and hardware roofline models… (see the roofline sketch after this list) ☆617 · Updated last year
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving (see the draft-and-verify sketch after this list). ☆676 · Updated last week
- Allow torch tensor memory to be released and resumed later ☆216 · Updated 3 weeks ago
- Stateful LLM Serving ☆95 · Updated 10 months ago
- PyTorch library for cost-effective, fast, and easy serving of MoE models. ☆279 · Updated last week
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆233 · Updated 2 years ago
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving ☆336 · Updated last year
- Zero Bubble Pipeline Parallelism ☆449 · Updated 9 months ago
- Materials for learning SGLang ☆738 · Updated last month
- A lightweight design for computation-communication overlap. ☆219 · Updated 3 weeks ago
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of … ☆313 · Updated 8 months ago
- nnScaler: Compiling DNN models for Parallel Training ☆124 · Updated 4 months ago
- A ChatGPT (GPT-3.5) & GPT-4 Workload Trace to Optimize LLM Serving Systems ☆238 · Updated last week
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Serving… ☆810 · Updated 11 months ago
- High performance Transformer implementation in C++. ☆150 · Updated last year
- ☆85 · Updated 3 months ago
- SpotServe: Serving Generative Large Language Models on Preemptible Instances ☆135 · Updated last year
- ☆150 · Updated last year
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serving… ☆263 · Updated this week
- A curated list of awesome projects and papers for distributed training or inference ☆265 · Updated last year
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆123 · Updated last month
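Several entries above analyze LLM inference with roofline-style reasoning. As a minimal sketch of that idea (not code from any repository listed here), the snippet below estimates whether a single decode step is compute-bound or memory-bandwidth-bound; the model size and hardware numbers are assumptions chosen purely for illustration.

```python
# Hypothetical numbers: a 7B-parameter FP16 model on a GPU with ~1 TB/s of
# HBM bandwidth and ~300 TFLOP/s of peak FP16 compute. At small batch sizes,
# decoding one token reads every weight once, so arithmetic intensity
# (FLOPs per byte moved) sits far below the hardware's ridge point and the
# step is memory-bandwidth-bound.

PARAMS = 7e9            # model parameters (assumed)
BYTES_PER_PARAM = 2     # FP16 weights
PEAK_FLOPS = 300e12     # peak FP16 throughput, FLOP/s (assumed)
PEAK_BW = 1.0e12        # peak HBM bandwidth, bytes/s (assumed)

def decode_step_time(batch_size: int) -> float:
    """Roofline estimate: a step takes the max of compute time and memory time."""
    flops = 2 * PARAMS * batch_size          # ~2 FLOPs per parameter per token
    bytes_moved = PARAMS * BYTES_PER_PARAM   # weights are read once per step
    return max(flops / PEAK_FLOPS, bytes_moved / PEAK_BW)

for bs in (1, 8, 64):
    t = decode_step_time(bs)
    print(f"batch={bs:3d}  step≈{t*1e3:.2f} ms  throughput≈{bs/t:,.0f} tok/s")
```

Under these assumed numbers the memory term dominates until the batch size approaches the ridge point (PEAK_FLOPS / PEAK_BW ≈ 300 FLOPs per byte), which is why batching raises decode throughput almost for free.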
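The speculative-decoding trainer above is built around drafting tokens with a small model and verifying them with the target model. Below is a toy, greedy-verification sketch of that draft-and-verify loop; `draft_model`, `target_model`, and the token rules are hypothetical stand-ins, not any listed repository's actual API.

```python
import random

VOCAB = list("abcde")

def draft_model(prefix: str) -> str:
    # Cheap proposal: here, just a random token (stand-in for a small drafter).
    return random.choice(VOCAB)

def target_model(prefix: str) -> str:
    # Expensive "ground truth": a toy rule that is stable within one run.
    return VOCAB[hash(prefix) % len(VOCAB)]

def speculative_decode(prefix: str, steps: int, k: int = 4) -> str:
    """Generate `steps` tokens, drafting `k` at a time and verifying them."""
    out = prefix
    while len(out) - len(prefix) < steps:
        # 1. Draft k tokens autoregressively with the cheap model.
        drafted, ctx = [], out
        for _ in range(k):
            t = draft_model(ctx)
            drafted.append(t)
            ctx += t
        # 2. Verify: accept the longest drafted prefix the target model agrees
        #    with, then append one corrected token from the target model.
        ctx = out
        for t in drafted:
            expected = target_model(ctx)
            if t == expected:
                ctx += t
            else:
                ctx += expected
                break
        else:
            ctx += target_model(ctx)  # all k accepted: one bonus token
        out = ctx
    return out[:len(prefix) + steps]

print(speculative_decode("llm", steps=12))
```

In real systems the target model scores all k drafted tokens in one batched forward pass (and verifies distributions rather than argmaxes), which is where the speedup over token-by-token decoding comes from.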