rayleizhu / vllm-ra
[ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts
☆40 · Updated Feb 29, 2024
Alternatives and similar repositories for vllm-ra
Users interested in vllm-ra are comparing it to the libraries listed below.
- ☆85 · Updated Apr 18, 2025
- ☆13 · Updated Jan 7, 2025
- ☆131 · Updated Nov 11, 2024
- LoRAFusion: Efficient LoRA Fine-Tuning for LLMs ☆23 · Updated Sep 23, 2025
- ☆16 · Updated Mar 3, 2024
- A curated list of recent papers on efficient video attention for video diffusion models, including sparsification, quantization, and cach… ☆56 · Updated Oct 27, 2025
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆142 · Updated Dec 4, 2024
- A fork of SGLang for hip-attention integration; please refer to hip-attention for details. ☆18 · Updated Dec 23, 2025
- Official repo for the paper "Accelerating Parallel Sampling of Diffusion Models", Tang et al., ICML 2024, https://openreview.net… ☆16 · Updated Jul 19, 2024
- ☆155 · Updated Mar 4, 2025
- ☆24 · Updated Dec 11, 2024
- ☆27 · Updated Jan 7, 2025
- Whisper in TensorRT-LLM ☆17 · Updated Sep 21, 2023
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆78 · Updated Mar 12, 2024
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆524 · Updated Feb 10, 2025
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache ☆356 · Updated Nov 20, 2025
- An Open-Source RAG Workload Trace to Optimize RAG Serving Systems ☆35 · Updated Nov 18, 2025
- [OSDI'24] Serving LLM-based Applications Efficiently with Semantic Variable ☆209 · Updated Sep 21, 2024
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆222 · Updated Aug 19, 2024
- KV cache compression for high-throughput LLM inference ☆154 · Updated Feb 5, 2025
- The official implementation of the EMNLP 2023 paper LLM-FP4 ☆220 · Updated Dec 15, 2023
- Build pure WebAssembly from pre-trained DL model ☆22 · Updated Apr 30, 2020
- A machine learning competition in Automated Deep Learning (AutoDL), co-organized by ChaLearn, Google and 4Paradigm. Accepted at NeurIPS 2… ☆22 · Updated Dec 10, 2020
- ☢️ TensorRT 2023 contest, second round: inference acceleration optimization for the Llama model based on TensorRT-LLM ☆51 · Updated Oct 20, 2023
- ☆96 · Updated Dec 6, 2024
- Artifact for "Apparate: Rethinking Early Exits to Tame Latency-Throughput Tensions in ML Serving" [SOSP '24] ☆25 · Updated Nov 21, 2024
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models ☆23 · Updated Mar 15, 2024
- Tile-based language built for AI computation across all scales ☆123 · Updated this week
- A low-latency & high-throughput serving engine for LLMs ☆474 · Updated Jan 8, 2026
- Prefix-Aware Attention for LLM Decoding ☆27 · Updated Jan 23, 2026
- [NeurIPS 2024] Learning-to-Cache: Accelerating Diffusion Transformer via Layer Caching ☆117 · Updated Jul 15, 2024
- Standalone Flash Attention v2 kernel without libtorch dependency ☆114 · Updated Sep 10, 2024
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆176 · Updated Jul 12, 2024
- Nex Venus Communication Library ☆72 · Updated Nov 17, 2025
- ☆30 · Updated Jul 22, 2024
- ReBase: Training Task Experts through Retrieval Based Distillation ☆29 · Updated Feb 5, 2025
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ☆965 · Updated Feb 5, 2026
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆120 · Updated Mar 6, 2024
- (WIP) Parallel inference for black-forest-labs' FLUX model ☆18 · Updated Nov 18, 2024