Dynamic Memory Management for Serving LLMs without PagedAttention
☆466 · Updated May 30, 2025
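This page does not explain how vattention manages KV-cache memory without PagedAttention. As a purely illustrative sketch, assuming the approach builds on the CUDA virtual memory management driver API (cuMemAddressReserve / cuMemCreate / cuMemMap / cuMemSetAccess), the idea would be to reserve one contiguous virtual range for the KV cache up front and commit physical pages on demand as the cache grows. The `KVCacheRegion` struct and helper names below are hypothetical, not part of the repository.

```c
/* Hypothetical sketch: grow a KV-cache buffer that stays contiguous in virtual
 * memory by mapping physical chunks on demand with the CUDA VMM driver API.
 * Assumes cuInit() has been called and a CUDA context is current. */
#include <cuda.h>
#include <stddef.h>

typedef struct {
    CUdeviceptr base;          /* contiguous virtual range reserved up front */
    size_t reserved;           /* total virtual bytes reserved               */
    size_t mapped;             /* physical bytes mapped so far               */
    size_t granularity;        /* allocation granularity required by driver  */
    CUmemAllocationProp prop;
    CUmemAccessDesc access;
} KVCacheRegion;

/* Reserve a large virtual range once; no physical memory is committed yet. */
CUresult kv_region_init(KVCacheRegion *r, int device, size_t max_bytes) {
    r->mapped = 0;
    r->prop = (CUmemAllocationProp){0};
    r->prop.type = CU_MEM_ALLOCATION_TYPE_PINNED;
    r->prop.location.type = CU_MEM_LOCATION_TYPE_DEVICE;
    r->prop.location.id = device;
    r->access = (CUmemAccessDesc){0};
    r->access.location = r->prop.location;
    r->access.flags = CU_MEM_ACCESS_FLAGS_PROT_READWRITE;
    CUresult rc = cuMemGetAllocationGranularity(&r->granularity, &r->prop,
                                                CU_MEM_ALLOC_GRANULARITY_MINIMUM);
    if (rc != CUDA_SUCCESS) return rc;
    /* Round the reservation up to the driver's granularity. */
    r->reserved = ((max_bytes + r->granularity - 1) / r->granularity) * r->granularity;
    return cuMemAddressReserve(&r->base, r->reserved, 0, 0, 0);
}

/* Commit one more physical chunk and map it at the end of the virtual range,
 * so the KV cache grows without its base pointer ever changing. */
CUresult kv_region_grow(KVCacheRegion *r) {
    if (r->mapped + r->granularity > r->reserved) return CUDA_ERROR_OUT_OF_MEMORY;
    CUmemGenericAllocationHandle h;
    CUresult rc = cuMemCreate(&h, r->granularity, &r->prop, 0);
    if (rc != CUDA_SUCCESS) return rc;
    rc = cuMemMap(r->base + r->mapped, r->granularity, 0, h, 0);
    if (rc != CUDA_SUCCESS) return rc;
    rc = cuMemSetAccess(r->base + r->mapped, r->granularity, &r->access, 1);
    if (rc == CUDA_SUCCESS) r->mapped += r->granularity;
    return rc;
}
```

Because the virtual base address never moves, attention kernels can treat the KV cache as ordinary contiguous memory; only the amount of physically backed space changes as sequences grow.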
Alternatives and similar repositories for vattention
Users interested in vattention are comparing it to the libraries listed below.
- A throughput-oriented high-performance serving framework for LLMs ☆949 · Updated Oct 29, 2025
- A low-latency & high-throughput serving engine for LLMs ☆484 · Updated Jan 8, 2026
- FlashInfer: Kernel Library for LLM Serving ☆5,145 · Updated this week
- Efficient and easy multi-instance LLM serving ☆532 · Updated Mar 12, 2026
- [OSDI'24] Serving LLM-based Applications Efficiently with Semantic Variable ☆210 · Updated Sep 21, 2024
- Disaggregated serving system for Large Language Models (LLMs). ☆785 · Updated Apr 6, 2025
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. ☆1,273 · Updated Aug 28, 2025
- GLake: optimizing GPU memory management and IO transmission. ☆498 · Updated Mar 24, 2025
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆490 · Updated this week
- Distributed Compiler based on Triton for Parallel Systems ☆1,386 · Updated Mar 11, 2026
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI. ☆4,921 · Updated Mar 14, 2026
- Since the emergence of ChatGPT in 2022, accelerating Large Language Models has become increasingly important. Here is a list of papers… ☆281 · Updated Mar 6, 2025
- ☆105 · Updated Sep 9, 2024
- Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training ☆1,864 · Updated Mar 12, 2026
- A large-scale simulation framework for LLM inference ☆556 · Updated Jul 25, 2025
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ☆376 · Updated Jul 10, 2025
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Serving… ☆818 · Updated Mar 6, 2025
- High performance Transformer implementation in C++. ☆153 · Updated Jan 18, 2025
- ☆131 · Updated Nov 11, 2024
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment. ☆752 · Updated Aug 6, 2025
- 📚 A curated list of Awesome LLM/VLM Inference Papers with Codes: Flash-Attention, Paged-Attention, WINT8/4, Parallelism, etc. 🎉 ☆5,062 · Updated this week
- A lightweight design for computation-communication overlap. ☆225 · Updated Jan 20, 2026
- KV cache store for distributed LLM inference ☆399 · Updated Nov 13, 2025
- An easy-to-understand TensorOp Matmul Tutorial ☆409 · Updated Mar 5, 2026
- ☆85 · Updated Apr 18, 2025
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel… ☆192 · Updated Jan 28, 2025
- ☆632 · Updated Jan 14, 2026
- ☆119 · Updated May 19, 2025
- ☆29 · Updated Mar 24, 2025
- TiledLower is a Dataflow Analysis and Codegen Framework written in Rust. ☆13 · Updated Nov 23, 2024
- Mirage Persistent Kernel: Compiling LLMs into a MegaKernel ☆2,159 · Updated this week
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of …) ☆320 · Updated Jun 10, 2025
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆1,070 · Updated this week
- Quantized Attention on GPU ☆44 · Updated Nov 22, 2024
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up long-context LLM inference, attention is computed with approximate, dynamic sparsity… ☆1,196 · Updated Mar 9, 2026
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. ☆1,041 · Updated Sep 4, 2024
- Tile primitives for speedy kernels ☆3,232 · Updated this week
- Large Language Model (LLM) Systems Paper List ☆1,872 · Updated this week
- NVIDIA Inference Xfer Library (NIXL) ☆945 · Updated this week