Alternatives and similar repositories for chunk-attention (☆85, updated Apr 18, 2025)
Users interested in chunk-attention are comparing it to the libraries listed below.
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts (☆40, updated Feb 29, 2024)
- Dynamic Memory Management for Serving LLMs without PagedAttention (☆466, updated May 30, 2025)
- [COLM 2024] TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding (☆277, updated Aug 31, 2024)
- ☆131 (updated Nov 11, 2024)
- ☆29 (updated Jun 22, 2025)
- [OSDI'24] Serving LLM-based Applications Efficiently with Semantic Variable (☆210, updated Sep 21, 2024)
- ☆15 (updated Apr 11, 2024)
- ☆20 (updated Jun 9, 2025)
- ☆29 (updated Mar 24, 2025)
- Disaggregated serving system for Large Language Models (LLMs) (☆785, updated Apr 6, 2025)
- 📰 Must-read papers on KV Cache Compression (constantly updating 🤗) (☆674, updated Feb 24, 2026)
- A low-latency & high-throughput serving engine for LLMs (☆484, updated Jan 8, 2026)
- Efficient and easy multi-instance LLM serving (☆532, updated Mar 12, 2026)
- High Performance Int8 GEMM Kernels for SM80 and later GPUs (☆19, updated Mar 11, 2025)
- ☆152 (updated Oct 9, 2024)
- 16-fold memory access reduction with nearly no loss (☆108, updated Mar 26, 2025)
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads (☆531, updated Feb 10, 2025)
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs (☆62, updated Mar 25, 2025)
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding (☆144, updated Dec 4, 2024)
- [ICLR 2025] PEARL: Parallel Speculative Decoding with Adaptive Draft Length (☆148, updated Dec 23, 2025)
- Official repository for the paper Local Linear Attention: An Optimal Interpolation of Linear and Softmax Attention For Test-Time Regressi… (☆23, updated Oct 1, 2025)
- Code for the paper [ICLR 2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference (☆164, updated Oct 13, 2025)
- A simple API to use CUPTI (☆10, updated Aug 19, 2025)
- Efficient LLM Inference over Long Sequences (☆393, updated Jun 25, 2025)
- Official implementation of the paper SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction (☆51, updated Oct 18, 2024)
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache (☆359, updated Nov 20, 2025)
- ☆162 (updated Feb 15, 2025)
- ☆19 (updated Dec 24, 2024)
- Source code for Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs (☆43, updated Aug 14, 2024)
- [NeurIPS 2024] The official implementation of "Kangaroo: Lossless Self-Speculative Decoding for Accelerating LLMs via Double Early Exitin… (☆68, updated Jun 26, 2024)
- ☆65 (updated Apr 26, 2025)
- [NeurIPS 2024] Fast Best-of-N Decoding via Speculative Rejection (☆55, updated Oct 29, 2024)
- ☆20 (updated Sep 24, 2025)
- A throughput-oriented high-performance serving framework for LLMs (☆949, updated Oct 29, 2025)
- ☆169 (updated Jul 15, 2025)
- A ChatGPT (GPT-3.5) & GPT-4 Workload Trace to Optimize LLM Serving Systems (☆243, updated this week)
- ☆15 (updated Jun 26, 2024)
- Beyond KV Caching: Shared Attention for Efficient LLMs (☆20, updated Jul 19, 2024)
- Go package implementing an indexable ordered multimap (☆25, updated Feb 3, 2019)