[HPCA 2026] A GPU-optimized system for efficient long-context LLM decoding with a low-bit KV cache.
☆81 · Updated Dec 18, 2025
Alternatives and similar repositories for BitDecoding
Users interested in BitDecoding are comparing it to the libraries listed below.
- ☆20 · Updated Sep 28, 2024
- TensorRT-in-Action is a GitHub repository providing code examples for using TensorRT, with accompanying Jupyter notebooks. ☆14 · Updated Jun 1, 2023
- Benchmark tests supporting the TiledCUDA library. ☆18 · Updated Nov 19, 2024
- Examples of CUDA implementations using CUTLASS CuTe ☆270 · Updated Jul 1, 2025
- Tutorials on extending and importing TVM with a CMake include dependency. ☆15 · Updated Oct 11, 2024
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache ☆359 · Updated Nov 20, 2025
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs ☆62 · Updated Mar 25, 2025
- ⚡️Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak performance. ☆149 · Updated May 10, 2025
- AFPQ code implementation ☆23 · Updated Nov 6, 2023
- HALO: Hadamard-Assisted Low-Precision Optimization and Training method for finetuning LLMs. 🚀 The official implementation of https://arx… ☆29 · Updated Feb 17, 2025
- ☆52 · Updated May 19, 2025
- 🎉My Collections of CUDA Kernels~ ☆10 · Updated Jun 25, 2024
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel … ☆192 · Updated Jan 28, 2025
- FP8 flash attention for the Ada architecture implemented with the CUTLASS repository ☆80 · Updated Aug 12, 2024
- ☆39 · Updated Dec 14, 2025
- Quantized Attention on GPU ☆44 · Updated Nov 22, 2024
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling ☆21 · Updated this week
- ☆90 · Updated May 31, 2025
- [COLM 2024] SKVQ: Sliding-window Key and Value Cache Quantization for Large Language Models ☆24 · Updated Oct 5, 2024
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring ☆272 · Updated Jul 6, 2025
- Open deep learning compiler stack for CPU, GPU, and specialized accelerators ☆19 · Updated Mar 12, 2026
- A CUDA kernel for NHWC GroupNorm for PyTorch ☆22 · Updated Nov 15, 2024
- ☆32 · Updated Jul 2, 2025
- TiledKernel is a code generation library based on macro kernels and memory hierarchy graph data structure. ☆19 · Updated May 12, 2024
- Awesome code, projects, books, etc. related to CUDA ☆30 · Updated Feb 3, 2026
- Residual vector quantization for KV cache compression in large language models ☆12 · Updated Oct 22, 2024
- TiledLower is a Dataflow Analysis and Codegen Framework written in Rust. ☆13 · Updated Nov 23, 2024
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆106 · Updated Jun 28, 2025
- ☆46 · Updated Jun 19, 2024
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios. ☆43 · Updated Feb 27, 2025
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning ☆170 · Updated Nov 11, 2025
- Flash attention tutorial written in Python, Triton, CUDA, and CUTLASS ☆491 · Updated Jan 20, 2026
- Persistent dense GEMM for Hopper in `CuTeDSL` ☆15 · Updated Aug 9, 2025
- ☆261 · Updated Jul 11, 2024
- ☆158 · Updated Dec 26, 2024
- ☆119 · Updated May 19, 2025
- Multiple GEMM operators built with CUTLASS to support LLM inference. ☆19 · Updated Aug 3, 2025
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆95 · Updated Feb 20, 2026
- List of papers related to Vision Transformers quantization and hardware acceleration in recent AI conferences and journals. ☆103 · Updated Jun 2, 2024