HPMLL / SpInfer_EuroSys25
☆29 · Updated 8 months ago
Alternatives and similar repositories for SpInfer_EuroSys25
Users interested in SpInfer_EuroSys25 are comparing it to the repositories listed below
- [HPCA 2026] A GPU-optimized system for efficient long-context LLM decoding with a low-bit KV cache. ☆63 · Updated 2 weeks ago
- LLM serving cluster simulator ☆122 · Updated last year
- ☆57 · Updated last year
- ☆23 · Updated last year
- ☆124 · Updated last year
- ☆139 · Updated 2 weeks ago
- ☆15 · Updated last year
- Summary of some awesome work for optimizing LLM inference ☆139 · Updated this week
- ☆41 · Updated last year
- ☆27 · Updated 8 months ago
- Magicube is a high-performance library for quantized sparse matrix operations (SpMM and SDDMM) in deep learning on Tensor Cores. ☆90 · Updated 3 years ago
- InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management (OSDI'24) ☆163 · Updated last year
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs ☆59 · Updated 8 months ago
- ☆161 · Updated last year
- ArkVale: Efficient Generative LLM Inference with Recallable Key-Value Eviction (NIPS'24) ☆47 · Updated 11 months ago
- FlashSparse significantly reduces the computation redundancy for unstructured sparsity (for SpMM and SDDMM) on Tensor Cores through a Swa… ☆35 · Updated 2 months ago
- A lightweight design for computation-communication overlap. ☆190 · Updated last month
- A ChatGPT (GPT-3.5) & GPT-4 Workload Trace to Optimize LLM Serving Systems ☆220 · Updated 4 months ago
- ☆32 · Updated 3 years ago
- ☆79 · Updated last month
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆224 · Updated 2 years ago
- ☆38 · Updated last month
- Compiler for Dynamic Neural Networks ☆46 · Updated 2 years ago
- Since the emergence of ChatGPT in 2022, accelerating Large Language Models has become increasingly important. Here is a list of pap… ☆282 · Updated 9 months ago
- ☆16 · Updated 3 years ago
- Repo for SpecEE: Accelerating Large Language Model Inference with Speculative Early Exiting (ISCA'25) ☆67 · Updated 7 months ago
- [NeurIPS 2025] ClusterFusion: Expanding Operator Fusion Scope for LLM Inference via Cluster-Level Collective Primitive ☆50 · Updated 2 months ago
- A Vectorized N:M Format for Unleashing the Power of Sparse Tensor Cores ☆55 · Updated 2 years ago
- High performance Transformer implementation in C++. ☆142 · Updated 10 months ago
- ☆16 · Updated 9 months ago