[ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts
☆40 · Feb 29, 2024 · Updated 2 years ago
Alternatives and similar repositories for vllm-ra
Users interested in vllm-ra are comparing it to the libraries listed below.
- ☆85 · Apr 18, 2025 · Updated 11 months ago
- ☆131 · Nov 11, 2024 · Updated last year
- Prefix-Aware Attention for LLM Decoding ☆33 · Jan 23, 2026 · Updated 2 months ago
- This is a fork of SGLang for hip-attention integration. Please refer to hip-attention for details. ☆18 · Dec 23, 2025 · Updated 3 months ago
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆145 · Dec 4, 2024 · Updated last year
- This is the official repo for the paper "Accelerating Parallel Sampling of Diffusion Models" (Tang et al., ICML 2024), https://openreview.net… ☆16 · Jul 19, 2024 · Updated last year
- An Open-Source RAG Workload Trace to Optimize RAG Serving Systems ☆36 · Nov 18, 2025 · Updated 4 months ago
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆78 · Mar 12, 2024 · Updated 2 years ago
- ☆155 · Mar 4, 2025 · Updated last year
- [OSDI'24] Serving LLM-based Applications Efficiently with Semantic Variable ☆211 · Sep 21, 2024 · Updated last year
- ☆96 · Dec 6, 2024 · Updated last year
- Code of "To The Point: Correspondence-driven monocular 3D category reconstruction (TTP)", NeurIPS 2021 ☆11 · Jan 24, 2022 · Updated 4 years ago
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆222 · Aug 19, 2024 · Updated last year
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache ☆363 · Nov 20, 2025 · Updated 4 months ago
- KV cache compression for high-throughput LLM inference ☆154 · Feb 5, 2025 · Updated last year
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆532 · Feb 10, 2025 · Updated last year
- ☆13 · Jan 7, 2025 · Updated last year
- ☆24 · Dec 11, 2024 · Updated last year
- The official implementation of the EMNLP 2023 paper LLM-FP4 ☆222 · Dec 15, 2023 · Updated 2 years ago
- LoRAFusion: Efficient LoRA Fine-Tuning for LLMs ☆26 · Sep 23, 2025 · Updated 6 months ago
- ☆23 · Jul 22, 2025 · Updated 8 months ago
- Code implementation of GPTAQ (https://arxiv.org/abs/2504.02692) ☆88 · Jul 28, 2025 · Updated 8 months ago
- Layer-Condensed KV cache w/ 10 times larger batch size, fewer params and less computation. Dramatic speed up with better task performance… ☆157 · Apr 7, 2025 · Updated 11 months ago
- Artifact for "Apparate: Rethinking Early Exits to Tame Latency-Throughput Tensions in ML Serving" [SOSP '24] ☆24 · Nov 21, 2024 · Updated last year
- A low-latency & high-throughput serving engine for LLMs ☆486 · Jan 8, 2026 · Updated 2 months ago
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆180 · Jul 12, 2024 · Updated last year
- ☆65 · Apr 26, 2025 · Updated 11 months ago
- Compare different hardware platforms via the Roofline Model for LLM inference tasks. ☆119 · Mar 13, 2024 · Updated 2 years ago
- This is the implementation of LeCo ☆32 · Jan 20, 2025 · Updated last year
- FORA introduces a simple yet effective caching mechanism in the Diffusion Transformer architecture for faster inference sampling. ☆55 · Jul 8, 2024 · Updated last year
- ☆52 · Feb 19, 2024 · Updated 2 years ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆147 · May 29, 2025 · Updated 10 months ago
- SR-VAE ☆10 · Jul 26, 2021 · Updated 4 years ago
- A throughput-oriented high-performance serving framework for LLMs ☆950 · Oct 29, 2025 · Updated 5 months ago
- Multi-Layer Key-Value sharing experiments on Pythia models ☆34 · Jun 14, 2024 · Updated last year
- Code for the paper: [ICLR 2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference ☆167 · Oct 13, 2025 · Updated 5 months ago
- PyTorch code for our paper "Progressive Binarization with Semi-Structured Pruning for LLMs" ☆13 · Mar 11, 2026 · Updated 2 weeks ago
- The repo for SHINE: A Scalable In-Context Hypernetwork for Mapping Context to LoRA in a Single Pass ☆29 · Mar 21, 2026 · Updated last week
- Deep Variational Information Bottleneck (DVIB) in PyTorch. ☆10 · Apr 25, 2020 · Updated 5 years ago