[ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts
☆40 · Feb 29, 2024 · Updated 2 years ago
Alternatives and similar repositories for vllm-ra
Users interested in vllm-ra are comparing it to the libraries listed below
- ☆85 · Apr 18, 2025 · Updated 10 months ago
- Prefix-Aware Attention for LLM Decoding · ☆29 · Jan 23, 2026 · Updated last month
- ☆11 · Apr 5, 2021 · Updated 4 years ago
- ☆131 · Nov 11, 2024 · Updated last year
- ☆16 · Mar 3, 2024 · Updated 2 years ago
- A curated list of recent papers on efficient video attention for video diffusion models, including sparsification, quantization, and cach… · ☆59 · Oct 27, 2025 · Updated 4 months ago
- LoRAFusion: Efficient LoRA Fine-Tuning for LLMs · ☆24 · Sep 23, 2025 · Updated 5 months ago
- A fork of SGLang for hip-attention integration. Please refer to hip-attention for details. · ☆18 · Dec 23, 2025 · Updated 2 months ago
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding · ☆143 · Dec 4, 2024 · Updated last year
- Official repo for the paper "Accelerating Parallel Sampling of Diffusion Models" (Tang et al., ICML 2024), https://openreview.net… · ☆16 · Jul 19, 2024 · Updated last year
- ☆155 · Mar 4, 2025 · Updated last year
- ☆27 · Jan 7, 2025 · Updated last year
- ☆24 · Dec 11, 2024 · Updated last year
- Whisper in TensorRT-LLM · ☆17 · Sep 21, 2023 · Updated 2 years ago
- ☆23 · Jul 22, 2025 · Updated 7 months ago
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models · ☆78 · Mar 12, 2024 · Updated last year
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads · ☆527 · Feb 10, 2025 · Updated last year
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache · ☆358 · Nov 20, 2025 · Updated 3 months ago
- [OSDI'24] Serving LLM-based Applications Efficiently with Semantic Variable · ☆210 · Sep 21, 2024 · Updated last year
- ☆21 · Mar 22, 2021 · Updated 4 years ago
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training · ☆222 · Aug 19, 2024 · Updated last year
- KV cache compression for high-throughput LLM inference · ☆154 · Feb 5, 2025 · Updated last year
- Official implementation of the EMNLP 2023 paper LLM-FP4 · ☆222 · Dec 15, 2023 · Updated 2 years ago
- Build pure WebAssembly from pre-trained DL models · ☆22 · Apr 30, 2020 · Updated 5 years ago
- ☢️ TensorRT 2023 Hackathon second round: Llama model inference acceleration based on TensorRT-LLM · ☆51 · Oct 20, 2023 · Updated 2 years ago
- FORA introduces a simple yet effective caching mechanism in the Diffusion Transformer architecture for faster inference sampling. · ☆52 · Jul 8, 2024 · Updated last year
- ☆95 · Dec 6, 2024 · Updated last year
- ☆65 · Apr 26, 2025 · Updated 10 months ago
- A low-latency & high-throughput serving engine for LLMs · ☆482 · Jan 8, 2026 · Updated 2 months ago
- [NeurIPS 2024] Learning-to-Cache: Accelerating Diffusion Transformer via Layer Caching · ☆117 · Jul 15, 2024 · Updated last year
- Keyformer proposes KV cache reduction through key-token identification, without the need for fine-tuning · ☆57 · Mar 26, 2024 · Updated last year
- Code implementation of GPTAQ (https://arxiv.org/abs/2504.02692) · ☆83 · Jul 28, 2025 · Updated 7 months ago
- Standalone Flash Attention v2 kernel without libtorch dependency · ☆114 · Sep 10, 2024 · Updated last year
- [ICML 2025 Spotlight] ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference · ☆283 · May 1, 2025 · Updated 10 months ago
- ReBase: Training Task Experts through Retrieval Based Distillation · ☆29 · Feb 5, 2025 · Updated last year
- ☆30 · Jul 22, 2024 · Updated last year
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" · ☆974 · Feb 5, 2026 · Updated last month
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM · ☆179 · Jul 12, 2024 · Updated last year
- A tile-based language built for AI computation across all scales · ☆138 · Feb 27, 2026 · Updated last week