microsoft / vattention
Dynamic Memory Management for Serving LLMs without PagedAttention
☆434 · updated 5 months ago
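vAttention keeps each request's KV cache virtually contiguous while committing physical GPU pages only as the sequence grows, building on CUDA's virtual memory management (VMM) driver API rather than PagedAttention's software paging. Below is a minimal sketch of that underlying mechanism on a single GPU; it is illustrative host code, not vAttention's actual implementation, and `kvReserveBytes` and the page count are made-up values.

```cuda
// Sketch: reserve a large virtual range for one request's KV cache up front,
// then map physical pages lazily as decoding proceeds. Error handling and
// cleanup are omitted for brevity. Build: nvcc vmm_sketch.cu -lcuda
#include <cuda.h>
#include <vector>

int main() {
    cuInit(0);
    CUdevice dev;
    cuDeviceGet(&dev, 0);
    CUcontext ctx;
    cuCtxCreate(&ctx, 0, dev);

    // Physical allocation properties: pinned device memory on GPU 0.
    CUmemAllocationProp prop = {};
    prop.type = CU_MEM_ALLOCATION_TYPE_PINNED;
    prop.location.type = CU_MEM_LOCATION_TYPE_DEVICE;
    prop.location.id = 0;

    size_t granularity = 0;  // minimum mappable page size (often 2 MiB)
    cuMemGetAllocationGranularity(&granularity, &prop,
                                  CU_MEM_ALLOC_GRANULARITY_MINIMUM);

    // 1) Reserve virtual address space for the maximum sequence length.
    size_t kvReserveBytes = 64ull << 20;  // illustrative 64 MiB cap
    CUdeviceptr base;
    cuMemAddressReserve(&base, kvReserveBytes, 0, 0, 0);

    // 2) As the sequence grows, commit one physical page at a time.
    std::vector<CUmemGenericAllocationHandle> pages;
    for (size_t off = 0; off < 4 * granularity; off += granularity) {
        CUmemGenericAllocationHandle h;
        cuMemCreate(&h, granularity, &prop, 0);      // physical page
        cuMemMap(base + off, granularity, 0, h, 0);  // map into the range
        CUmemAccessDesc access = {};
        access.location = prop.location;
        access.flags = CU_MEM_ACCESS_FLAGS_PROT_READWRITE;
        cuMemSetAccess(base + off, granularity, &access, 1);
        pages.push_back(h);
    }
    // Kernels can now treat [base, base + 4*granularity) as one contiguous
    // KV buffer -- no software page tables inside the attention kernel.
    return 0;
}
```

Because the mapped range is contiguous in virtual address space, unmodified attention kernels can index the KV cache with plain pointer arithmetic, which is exactly the property PagedAttention gives up.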
Alternatives and similar repositories for vattention
Users interested in vattention are comparing it to the libraries listed below.
- A low-latency & high-throughput serving engine for LLMs ☆440 · updated 3 weeks ago
- Efficient and easy multi-instance LLM serving ☆506 · updated 2 months ago
- Perplexity GPU Kernels ☆519 · updated 2 weeks ago
- Latency and Memory Analysis of Transformer Models for Training and Inference ☆461 · updated 6 months ago
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving. ☆460 · updated this week
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… ☆775 · updated 8 months ago
- Zero Bubble Pipeline Parallelism ☆433 · updated 6 months ago
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆222 · updated 2 years ago
- PyTorch library for cost-effective, fast and easy serving of MoE models. ☆257 · updated 3 weeks ago
- ☆312 · updated this week
- Allow torch tensor memory to be released and resumed later ☆164 · updated last week
- Disaggregated serving system for Large Language Models (LLMs). ☆721 · updated 7 months ago
- A lightweight design for computation-communication overlap. ☆183 · updated last month
- Since the emergence of ChatGPT in 2022, accelerating Large Language Models has become increasingly important. Here is a list of pap… ☆278 · updated 8 months ago
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving (see the int4 dequantization sketch after this list) ☆326 · updated last year
- Analyzes the inference of Large Language Models (LLMs) across computation, storage, transmission, and the hardware roofline mod… (see the roofline sketch after this list) ☆571 · updated last year
- nnScaler: Compiling DNN models for Parallel Training ☆118 · updated last month
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5). ☆271 · updated 3 months ago
- A throughput-oriented high-performance serving framework for LLMs ☆912 · updated 2 weeks ago
- High performance Transformer implementation in C++. ☆140 · updated 9 months ago
- ☆146 · updated 10 months ago
- Materials for learning SGLang ☆636 · updated 2 weeks ago
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆433 · updated this week
- ☆243 · updated last year
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of … ☆286 · updated 5 months ago
- A collection of memory-efficient attention operators implemented in the Triton language. ☆283 · updated last year
- ☆101 · updated last year
- An easy-to-understand TensorOp Matmul tutorial ☆390 · updated last month
- Flash attention tutorial written in Python, Triton, CUDA, and CUTLASS ☆443 · updated 5 months ago
- ☆124 · updated last year
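Several of the analyzer entries above reason about whether a kernel is compute- or memory-bound via the hardware roofline model. A minimal sketch of that arithmetic for a single-token decode GEMV, with hypothetical A100-class hardware numbers (all constants here are illustrative, not measurements):

```cuda
// Roofline sketch: a kernel is memory-bound when its arithmetic intensity
// (FLOPs per byte moved) falls below the hardware ridge point
// (peak FLOP/s divided by peak memory bandwidth).
#include <cstdio>

int main() {
    // Hypothetical A100-class hardware.
    const double peak_flops = 312e12;  // FP16 tensor-core peak, FLOP/s
    const double peak_bw    = 2.0e12;  // HBM bandwidth, bytes/s
    const double ridge      = peak_flops / peak_bw;  // ~156 FLOP/byte

    // Single-token decode: y = W x with W of shape [d, d], FP16 weights.
    const double d     = 8192;
    const double flops = 2.0 * d * d;  // one multiply-add per weight
    const double bytes = 2.0 * d * d;  // each FP16 weight read once
    const double intensity = flops / bytes;  // = 1 FLOP/byte -> memory-bound

    const double attainable = (intensity < ridge)
        ? intensity * peak_bw  // bandwidth-limited
        : peak_flops;          // compute-limited
    std::printf("intensity=%.1f FLOP/B, ridge=%.1f, attainable=%.1f TFLOP/s\n",
                intensity, ridge, attainable / 1e12);
    return 0;
}
```

At roughly 1 FLOP/byte, single-token decode sits far below the ridge point, which is why many of the projects above attack bytes moved (quantization, sparsity, KV-cache management) rather than raw FLOPs.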
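Likewise, the low-bit serving entries (Atom, QServe, the FP6/FP5 kernels) all hinge on unpacking sub-byte weights and rescaling them on the fly. Here is a generic int4 weight-dequantization kernel with per-group scales, sketched as one common scheme (two nibbles per byte, zero point 8); it is not any of those repos' actual kernels, and `group_size` is an assumed parameter:

```cuda
// Generic W4 dequant sketch: two int4 weights are packed per byte;
// each group of `group_size` weights shares one FP16 scale.
#include <cuda_fp16.h>
#include <cstdint>

__global__ void dequant_int4(const uint8_t* packed, const half* scales,
                             half* out, int n, int group_size) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // output element index
    if (i >= n) return;
    uint8_t byte = packed[i >> 1];
    // Low nibble for even indices, high nibble for odd; recenter to [-8, 7].
    int q = (i & 1) ? (byte >> 4) : (byte & 0xF);
    q -= 8;
    out[i] = __hmul(__int2half_rn(q), scales[i / group_size]);
}

// Example launch, assuming n weights and groups of 128:
//   dequant_int4<<<(n + 255) / 256, 256>>>(packed, scales, out, n, 128);
```

Production kernels fuse this unpacking into the GEMM itself so the 4-bit weights are never materialized in FP16 in global memory; the standalone version above just makes the bit manipulation explicit.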