microsoft / vidur
A large-scale simulation framework for LLM inference
☆361 · Updated 4 months ago
Alternatives and similar repositories for vidur:
Users interested in vidur are comparing it to the libraries listed below.
- A low-latency & high-throughput serving engine for LLMs ☆341 · Updated 2 months ago
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆345 · Updated 3 weeks ago
- Disaggregated serving system for Large Language Models (LLMs). ☆550 · Updated last week
- PyTorch library for cost-effective, fast and easy serving of MoE models. ☆165 · Updated 2 weeks ago
- Latency and Memory Analysis of Transformer Models for Training and Inference ☆403 · Updated last month
- Analyze the inference of Large Language Models (LLMs). Analyze aspects like computation, storage, transmission, and hardware roofline mod… ☆435 · Updated 7 months ago
- Efficient and easy multi-instance LLM serving ☆367 · Updated this week
- Since the emergence of ChatGPT in 2022, the acceleration of Large Language Models has become increasingly important. Here is a list of pap… ☆242 · Updated last month
- ☆97 · Updated 3 months ago
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of … ☆156 · Updated 9 months ago
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆204 · Updated last year
- A ChatGPT (GPT-3.5) & GPT-4 Workload Trace to Optimize LLM Serving Systems ☆159 · Updated 6 months ago
- Papers and their code for AI systems ☆293 · Updated last week
- ☆94 · Updated 5 months ago
- [OSDI'24] Serving LLM-based Applications Efficiently with Semantic Variable ☆153 · Updated 6 months ago
- LLM Serving Performance Evaluation Harness ☆75 · Updated last month
- High performance Transformer implementation in C++. ☆115 · Updated 2 months ago
- SpotServe: Serving Generative Large Language Models on Preemptible Instances ☆113 · Updated last year
- LLM serving cluster simulator ☆96 · Updated 11 months ago
- Materials for learning SGLang ☆371 · Updated 3 weeks ago
- ☆96 · Updated 6 months ago
- Distributed Triton for Parallel Systems ☆372 · Updated last week
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving ☆303 · Updated 9 months ago
- ☆56 · Updated 10 months ago
- nnScaler: Compiling DNN models for Parallel Training ☆106 · Updated 2 months ago
- Curated collection of papers in machine learning systems ☆281 · Updated last week
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5). ☆247 · Updated 5 months ago
- 📰 Must-read papers on KV Cache Compression (constantly updating 🤗). ☆367 · Updated last week
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… ☆635 · Updated last month
- ☆312 · Updated last year