☆84 · Apr 18, 2025 · Updated 11 months ago
Alternatives and similar repositories for chunk-attention
Users interested in chunk-attention are comparing it to the libraries listed below.
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts ☆40 · Feb 29, 2024 · Updated 2 years ago
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆470 · May 30, 2025 · Updated 10 months ago
- [COLM 2024] TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding ☆279 · Aug 31, 2024 · Updated last year
- ☆132 · Nov 11, 2024 · Updated last year
- ☆29 · Jun 22, 2025 · Updated 9 months ago
- [OSDI'24] Serving LLM-based Applications Efficiently with Semantic Variable ☆213 · Sep 21, 2024 · Updated last year
- ☆15 · Apr 11, 2024 · Updated 2 years ago
- ☆20 · Jun 9, 2025 · Updated 10 months ago
- ☆29 · Mar 24, 2025 · Updated last year
- Disaggregated serving system for Large Language Models (LLMs) ☆798 · Apr 6, 2025 · Updated last year
- 📰 Must-read papers on KV Cache Compression (constantly updating 🤗) ☆679 · Feb 24, 2026 · Updated last month
- A low-latency & high-throughput serving engine for LLMs ☆490 · Jan 8, 2026 · Updated 3 months ago
- Efficient and easy multi-instance LLM serving ☆541 · Mar 12, 2026 · Updated 3 weeks ago
- High Performance Int8 GEMM Kernels for SM80 and later GPUs ☆21 · Mar 11, 2025 · Updated last year
- ☆156 · Oct 9, 2024 · Updated last year
- 16-fold memory access reduction with nearly no loss ☆108 · Mar 26, 2025 · Updated last year
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆535 · Feb 10, 2025 · Updated last year
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs ☆63 · Mar 25, 2025 · Updated last year
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆145 · Dec 4, 2024 · Updated last year
- [ICLR 2025] PEARL: Parallel Speculative Decoding with Adaptive Draft Length ☆155 · Dec 23, 2025 · Updated 3 months ago
- Official repository for the paper Local Linear Attention: An Optimal Interpolation of Linear and Softmax Attention For Test-Time Regression ☆23 · Oct 1, 2025 · Updated 6 months ago
- Code for the paper [ICLR 2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference ☆168 · Oct 13, 2025 · Updated 5 months ago
- A simple API to use CUPTI ☆10 · Aug 19, 2025 · Updated 7 months ago
- Efficient LLM Inference over Long Sequences ☆393 · Jun 25, 2025 · Updated 9 months ago
- The official implementation of the paper: SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction ☆50 · Oct 18, 2024 · Updated last year
- ☆163 · Feb 15, 2025 · Updated last year
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache ☆381 · Nov 20, 2025 · Updated 4 months ago
- ☆19 · Dec 24, 2024 · Updated last year
- This repo contains the source code for: Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs ☆44 · Aug 14, 2024 · Updated last year
- [NeurIPS 2024] The official implementation of "Kangaroo: Lossless Self-Speculative Decoding for Accelerating LLMs via Double Early Exiting" ☆68 · Jun 26, 2024 · Updated last year
- [NeurIPS 2024] Fast Best-of-N Decoding via Speculative Rejection ☆54 · Oct 29, 2024 · Updated last year
- ☆65 · Apr 26, 2025 · Updated 11 months ago
- ☆20 · Sep 24, 2025 · Updated 6 months ago
- A throughput-oriented high-performance serving framework for LLMs ☆953 · Mar 29, 2026 · Updated last week
- ☆172 · Jul 15, 2025 · Updated 8 months ago
- ☆15 · Jun 26, 2024 · Updated last year
- A ChatGPT (GPT-3.5) & GPT-4 Workload Trace to Optimize LLM Serving Systems ☆250 · Mar 19, 2026 · Updated 3 weeks ago
- Beyond KV Caching: Shared Attention for Efficient LLMs ☆20 · Jul 19, 2024 · Updated last year
- Go package implementing an indexable ordered multimap ☆25 · Feb 3, 2019 · Updated 7 years ago