xxyux / SpInfer
SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs
☆59 · Updated 7 months ago
Alternatives and similar repositories for SpInfer
Users interested in SpInfer are comparing it to the libraries listed below.
- [HPCA 2025] A GPU-optimized system for efficient long-context LLM decoding with a low-bit KV cache. ☆62 · Updated last week
- ☆83 · Updated 9 months ago
- An implementation of Flash Attention using CuTe. ☆96 · Updated 11 months ago
- ☆159 · Updated last year
- ☆65 · Updated 6 months ago
- ☆36 · Updated 3 weeks ago
- ☆19 · Updated last year
- [DAC'25] Official implementation of "HybriMoE: Hybrid CPU-GPU Scheduling and Cache Management for Efficient MoE Inference". ☆89 · Updated 5 months ago
- ☆57 · Updated last year
- LLM Inference with Microscaling Format. ☆32 · Updated last year
- Tile-based language built for AI computation across all scales. ☆80 · Updated last week
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer. ☆143 · Updated 2 months ago
- DeeperGEMM: a heavily optimized version of DeepGEMM. ☆73 · Updated 6 months ago
- Artifacts of EVT (ASPLOS'24). ☆28 · Updated last year
- Tile-Based Runtime for Ultra-Low-Latency LLM Inference. ☆178 · Updated this week
- Multi-Level Triton Runner supporting Python, IR, PTX, and cubin. ☆76 · Updated last week
- [COLM 2024] SKVQ: Sliding-window Key and Value Cache Quantization for Large Language Models. ☆24 · Updated last year
- ArkVale: Efficient Generative LLM Inference with Recallable Key-Value Eviction (NeurIPS'24). ☆44 · Updated 11 months ago
- ☆58 · Updated last year
- A Vectorized N:M Format for Unleashing the Power of Sparse Tensor Cores. ☆54 · Updated last year
- A lightweight design for computation-communication overlap. ☆187 · Updated last month
- FP8 Flash Attention implemented on the Ada architecture using the CUTLASS library. ☆78 · Updated last year
- Quantized Attention on GPU. ☆44 · Updated last year
- A practical way of learning Swizzle. ☆33 · Updated 9 months ago
- ☆111 · Updated 6 months ago
- ☆80 · Updated last year
- ☆102 · Updated last year
- DLSlime: Flexible & Efficient Heterogeneous Transfer Toolkit. ☆82 · Updated this week
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity. ☆223 · Updated 2 years ago
- A standalone GEMM kernel for FP16 activations and quantized weights, extracted from FasterTransformer. ☆96 · Updated 2 months ago