[ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts
☆40 · Feb 29, 2024 · Updated 2 years ago
Alternatives and similar repositories for vllm-ra
Users interested in vllm-ra are comparing it to the libraries listed below.
- ☆84 · Apr 18, 2025 · Updated last year
- ☆132 · Nov 11, 2024 · Updated last year
- Prefix-Aware Attention for LLM Decoding · ☆35 · Mar 31, 2026 · Updated 2 weeks ago
- A fork of SGLang for hip-attention integration; see hip-attention for details · ☆18 · Mar 31, 2026 · Updated 2 weeks ago
- [ICLR 2025] Breaking the Throughput-Latency Trade-off for Long Sequences with Speculative Decoding · ☆145 · Dec 4, 2024 · Updated last year
- ☆16 · Mar 3, 2024 · Updated 2 years ago
- An Open-Source RAG Workload Trace to Optimize RAG Serving Systems · ☆36 · Nov 18, 2025 · Updated 5 months ago
- Injecting Adrenaline into LLM Serving: Boosting Resource Utilization and Throughput via Attention Disaggregation · ☆41 · Apr 13, 2026 · Updated last week
- ☆156 · Mar 4, 2025 · Updated last year
- ☆66 · Updated this week
- [OSDI '24] Serving LLM-based Applications Efficiently with Semantic Variable · ☆213 · Sep 21, 2024 · Updated last year
- ☆96 · Dec 6, 2024 · Updated last year
- Code for "To The Point: Correspondence-driven monocular 3D category reconstruction (TTP)", NeurIPS 2021 · ☆11 · Jan 24, 2022 · Updated 4 years ago
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training · ☆222 · Aug 19, 2024 · Updated last year
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache · ☆384 · Nov 20, 2025 · Updated 4 months ago
- KV cache compression for high-throughput LLM inference · ☆156 · Feb 5, 2025 · Updated last year
- Chinese financial LLM evaluation benchmark: twenty-five tasks across six categories with tiered grading; domestic models achieved an A grade · ☆10 · May 6, 2024 · Updated last year
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads · ☆536 · Feb 10, 2025 · Updated last year
- ☆13 · Jan 7, 2025 · Updated last year
- ☆11 · Apr 5, 2021 · Updated 5 years ago
- ☆24 · Dec 11, 2024 · Updated last year
- Awesome list of quantization papers with code · ☆10 · Feb 24, 2021 · Updated 5 years ago
- The official implementation of the EMNLP 2023 paper LLM-FP4 · ☆222 · Dec 15, 2023 · Updated 2 years ago
- A curated list of recent papers on efficient video attention for video diffusion models, including sparsification, quantization, and caching · ☆60 · Oct 27, 2025 · Updated 5 months ago
- LoRAFusion: Efficient LoRA Fine-Tuning for LLMs · ☆26 · Apr 8, 2026 · Updated last week
- Source code for the paper "Pantheon: Preemptible Multi-DNN Inference on Mobile Edge GPUs" · ☆16 · Apr 15, 2024 · Updated 2 years ago
- Code implementation of GPTAQ (https://arxiv.org/abs/2504.02692) · ☆88 · Jul 28, 2025 · Updated 8 months ago
- ☢️ TensorRT 2023 competition finals: inference acceleration for Llama models based on TensorRT-LLM · ☆52 · Oct 20, 2023 · Updated 2 years ago
- Layer-condensed KV cache: 10x larger batch size with fewer parameters and less computation, giving dramatic speedups with better task performance · ☆157 · Apr 7, 2025 · Updated last year
- Artifact for "Apparate: Rethinking Early Exits to Tame Latency-Throughput Tensions in ML Serving" [SOSP '24] · ☆24 · Nov 21, 2024 · Updated last year
- A low-latency, high-throughput serving engine for LLMs · ☆491 · Jan 8, 2026 · Updated 3 months ago
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLMs · ☆181 · Jul 12, 2024 · Updated last year
- ☆67 · Apr 26, 2025 · Updated 11 months ago
- Compare hardware platforms via the Roofline Model for LLM inference tasks · ☆118 · Mar 13, 2024 · Updated 2 years ago
- The implementation of LeCo · ☆32 · Jan 20, 2025 · Updated last year
- FORA introduces a simple yet effective caching mechanism in the Diffusion Transformer architecture for faster inference sampling · ☆55 · Jul 8, 2024 · Updated last year
- ☆10 · Jun 28, 2019 · Updated 6 years ago
- PyTorch bindings for CUTLASS grouped GEMM · ☆150 · May 29, 2025 · Updated 10 months ago
- ☆52 · Feb 19, 2024 · Updated 2 years ago
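The Roofline-Model comparison repo listed above rests on one formula: attainable throughput is the minimum of peak compute and memory bandwidth times arithmetic intensity. A minimal sketch of that calculation, where the GPU figures (300 TFLOP/s peak, 2 TB/s bandwidth) and the per-phase intensities are illustrative assumptions, not measurements from the repo:

```python
# Roofline model: attainable FLOP/s = min(peak_flops, mem_bw * intensity).
# Hardware numbers below are illustrative assumptions, not vendor specs.

def attainable_flops(peak_flops: float, mem_bw: float, intensity: float) -> float:
    """Roofline-attainable throughput for a kernel with the given
    arithmetic intensity (FLOPs per byte moved)."""
    return min(peak_flops, mem_bw * intensity)

# Assumed GPU: 300 TFLOP/s peak compute, 2 TB/s memory bandwidth.
PEAK, BW = 300e12, 2e12
ridge = PEAK / BW  # intensity (FLOPs/byte) where the two regimes meet: 150

# LLM decode streams every weight byte per generated token, so its
# intensity is tiny (~2 FLOPs/byte here) -> far below the ridge, memory-bound.
decode = attainable_flops(PEAK, BW, 2.0)
# Prefill amortizes each weight read over many tokens, so its intensity
# (assumed ~400 FLOPs/byte) sits above the ridge -> compute-bound.
prefill = attainable_flops(PEAK, BW, 400.0)

print(f"ridge point: {ridge:.0f} FLOPs/byte")
print(f"decode:  {decode:.1e} FLOP/s (memory-bound)")
print(f"prefill: {prefill:.1e} FLOP/s (compute-bound)")
```

This is why LLM decode throughput tracks memory bandwidth rather than peak FLOP/s across hardware, which is the kind of cross-platform comparison a roofline tool makes visible.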