SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs
☆64 · Updated Mar 25, 2025
Alternatives and similar repositories for SpInfer
Users interested in SpInfer are comparing it to the libraries listed below.
- Quantized Attention on GPU ☆44 · Updated Nov 22, 2024
- ☆33 · Updated Apr 2, 2025
- ☆167 · Updated Jul 22, 2024
- Benchmark tests supporting the TiledCUDA library. ☆18 · Updated Nov 19, 2024
- ☆66 · Updated Apr 26, 2025
- ☆46 · Updated Jun 19, 2024
- [NeurIPS 2023] ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer ☆30 · Updated Dec 6, 2023
- ☆20 · Updated Sep 28, 2024
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆243 · Updated Sep 24, 2023
- ☆164 · Updated Feb 15, 2025
- DeeperGEMM: crazy optimized version ☆86 · Updated May 5, 2025
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆96 · Updated Feb 20, 2026
- [HPCA 2026] A GPU-optimized system for efficient long-context LLM decoding with a low-bit KV cache. ☆86 · Updated Dec 18, 2025
- A sparse attention kernel supporting mixed sparse patterns ☆503 · Updated Jan 18, 2026
- ☆40 · Updated Dec 14, 2025
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning ☆179 · Updated Nov 11, 2025
- An auxiliary project analyzing the characteristics of KV in DiT attention. ☆34 · Updated Nov 29, 2024
- A framework that reduces autotuning overhead to zero for well-known deployments. ☆99 · Updated Sep 19, 2025
- ☆264 · Updated Jul 11, 2024
- Fast low-bit matmul kernels in Triton ☆445 · Updated Apr 24, 2026
- A Vectorized N:M Format for Unleashing the Power of Sparse Tensor Cores (see the sketch after this list) ☆60 · Updated Nov 24, 2023
- Tutorials on extending and importing TVM with a CMake include dependency. ☆16 · Updated Oct 11, 2024
- ☆98 · Updated Mar 26, 2025
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… ☆834 · Updated Mar 6, 2025
- High-performance GEMM implementation optimized for NVIDIA H100 GPUs, leveraging the Hopper architecture's TMA, WGMMA, and Thread Block Clusters ☆10 · Updated Dec 4, 2024
- [ICML24] Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for LLMs ☆99 · Updated Nov 25, 2024
- High Performance FP8 GEMM Kernels for SM89 and later GPUs. ☆21 · Updated Jan 24, 2025
- ☆98 · Updated May 31, 2025
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. ☆1,065 · Updated Sep 4, 2024
- ☆14 · Updated Dec 5, 2024
- ☆32 · Updated Jul 2, 2025
- A repository of Binary General Matrix Multiply (BGEMM) implemented with custom CUDA kernels. Thanks to FP6-LLM for the groundwork! ☆20 · Updated Aug 30, 2024
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment. ☆762 · Updated Aug 6, 2025
- ☆159 · Updated Dec 26, 2024
- An extension of TVMScript for writing simple, high-performance GPU kernels with tensor cores. ☆52 · Updated Jul 23, 2024
- High-speed GEMV kernels, up to 2.7x speedup over the PyTorch baseline. ☆128 · Updated Jul 13, 2024
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring ☆277 · Updated Jul 6, 2025
- Triton to TVM transpiler. ☆23 · Updated Oct 14, 2024
- ⚡️Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak performance. ☆151 · Updated May 10, 2025
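
The N:M entry above (marked "see the sketch after this list") refers to the structured sparsity that sparse tensor cores accelerate: in every group of M consecutive weights along a row, at most N may be non-zero (2:4 on NVIDIA Ampere and newer GPUs). The following is a minimal NumPy sketch of the 2:4 pruning constraint, written purely for illustration; it is not taken from any of the repositories listed here, and the function name is hypothetical.

```python
# Illustrative only: enforce a 2:4 structured-sparsity pattern by zeroing the
# two smallest-magnitude values in every group of four weights along a row.
import numpy as np

def prune_2_of_4(weights: np.ndarray) -> np.ndarray:
    """Return a copy of `weights` satisfying the 2:4 (N:M) sparsity constraint."""
    rows, cols = weights.shape
    assert cols % 4 == 0, "column count must be a multiple of the group size (4)"
    groups = weights.reshape(rows, cols // 4, 4)
    # Sort each group of 4 by magnitude; the first two indices are the smallest.
    order = np.argsort(np.abs(groups), axis=-1)
    mask = np.ones_like(groups, dtype=bool)
    np.put_along_axis(mask, order[..., :2], False, axis=-1)
    return (groups * mask).reshape(rows, cols)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.standard_normal((2, 8)).astype(np.float32)
    print(prune_2_of_4(w))  # each row keeps exactly 2 non-zeros per 4-wide group
```

This sketch only shows the masking constraint; hardware-oriented implementations additionally pack the surviving values together with their per-group indices into a compressed layout that the sparse tensor core instructions consume.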