feifeibear / ChituAttention
Quantized Attention on GPU
☆34 · Updated 2 months ago
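ChituAttention's own kernels are not reproduced here, but the general idea behind quantized attention is easy to sketch: quantize Q and K rows to INT8 with per-row scales, form attention scores with integer dot products, and rescale back to floating point. Below is a minimal, hedged CUDA sketch of that score computation; the `int8_score` helper and its parameters are illustrative assumptions, not ChituAttention's API.

```cuda
// Sketch of the core trick in quantized attention: INT8 Q/K rows, integer
// dot products via __dp4a (sm_61+), and per-row scales to recover an
// approximate FP32 score. Illustrative only; not ChituAttention's kernel.
#include <cstdint>

// q8, k8: INT8 rows of length head_dim (assumed to be a multiple of 4).
// q_scale, k_scale: per-row dequantization scales chosen when quantizing.
__device__ float int8_score(const int8_t *q8, const int8_t *k8,
                            float q_scale, float k_scale, int head_dim) {
    const int32_t *q4 = reinterpret_cast<const int32_t *>(q8);
    const int32_t *k4 = reinterpret_cast<const int32_t *>(k8);
    int32_t acc = 0;
    for (int i = 0; i < head_dim / 4; ++i)
        acc = __dp4a(q4[i], k4[i], acc);  // 4-way INT8 multiply-accumulate
    return static_cast<float>(acc) * q_scale * k_scale;  // dequantize
}
```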
Alternatives and similar repositories for ChituAttention:
Users interested in ChituAttention are comparing it to the libraries listed below.
- ⚡️ Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs to achieve peak performance ⚡️ (a minimal WMMA sketch follows this list). ☆52 · Updated 2 weeks ago
- ☆19 · Updated 4 months ago
- Implements Flash Attention using CuTe. ☆69 · Updated 2 months ago
- ☆36 · Updated last month
- Decoding Attention is specially optimized for multi-head attention (MHA), using CUDA cores for the decoding stage of LLM inference (a decode-stage sketch follows this list). ☆29 · Updated 3 months ago
- An auxiliary project analyzing the characteristics of KV in DiT attention. ☆25 · Updated 2 months ago
- Odysseus: Playground of LLM Sequence Parallelism ☆64 · Updated 8 months ago
- TileFusion is a highly efficient kernel template library designed to elevate the level of abstraction in CUDA C for processing tiles. ☆56 · Updated this week
- FP8 flash attention implemented on the Ada architecture using the CUTLASS repository. ☆53 · Updated 6 months ago
- GPTQ inference TVM kernel ☆38 · Updated 9 months ago
- ☆81 · Updated 5 months ago
- ☆61 · Updated 3 weeks ago
- A suite for parallel inference of Diffusion Transformers (DiTs) on multi-GPU clusters ☆39 · Updated 6 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆64 · Updated 3 months ago
- 16-fold memory access reduction with nearly no loss ☆76 · Updated 3 months ago
- ☆68 · Updated this week
- A framework that reduces autotuning overhead to zero for well-known deployments. ☆61 · Updated 3 weeks ago
- ☆62 · Updated 2 months ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆89 · Updated this week
- Tutorials on extending and importing TVM as a CMake include dependency. ☆13 · Updated 4 months ago
- A sparse attention kernel supporting mixed sparse patterns ☆133 · Updated last week
- Transformers components, but in Triton ☆31 · Updated 3 months ago
- Multiple GEMM operators constructed with CUTLASS to support LLM inference. ☆16 · Updated 4 months ago
- A standalone GEMM kernel for FP16 activations and quantized weights, extracted from FasterTransformer ☆88 · Updated 11 months ago
- Estimate MFU for DeepSeekV3 ☆16 · Updated last month
- A summary of system papers, frameworks, code, and tools for training or serving large models ☆56 · Updated last year
- Standalone Flash Attention v2 kernel without a libtorch dependency ☆104 · Updated 5 months ago
- ☆52 · Updated 10 months ago
- Benchmark tests supporting the TiledCUDA library. ☆15 · Updated 3 months ago
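For the HGEMM-from-scratch entry above, the basic WMMA pattern is worth seeing in isolation: each warp owns one 16x16 output tile and marches along K with `mma_sync`. The kernel below is a hedged sketch under assumptions of my own (kernel name, launch geometry, and M/N/K being multiples of 16); a peak-performance kernel adds shared-memory staging, swizzling, and software pipelining on top of this.

```cuda
// Minimal Tensor Core HGEMM tile via the WMMA API: C (FP32) = A * B (FP16),
// row-major, one 16x16x16 fragment per warp. Sketch only.
#include <mma.h>
#include <cuda_fp16.h>
using namespace nvcuda;

// Assumes M, N, K are multiples of 16 and the grid covers M/16 x N/16 warps.
__global__ void hgemm_wmma(const half *A, const half *B, float *C,
                           int M, int N, int K) {
    // Global warp coordinates: one warp per 16x16 tile of C.
    int warpM = blockIdx.y * blockDim.y + threadIdx.y;
    int warpN = (blockIdx.x * blockDim.x + threadIdx.x) / warpSize;
    if (warpM * 16 >= M || warpN * 16 >= N) return;

    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> aFrag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::row_major> bFrag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> cFrag;
    wmma::fill_fragment(cFrag, 0.0f);

    // March along K in 16-wide steps, accumulating into the fragment.
    for (int k = 0; k < K; k += 16) {
        wmma::load_matrix_sync(aFrag, A + warpM * 16 * K + k, K);
        wmma::load_matrix_sync(bFrag, B + k * N + warpN * 16, N);
        wmma::mma_sync(cFrag, aFrag, bFrag, cFrag);
    }
    wmma::store_matrix_sync(C + warpM * 16 * N + warpN * 16, cFrag, N,
                            wmma::mem_row_major);
}
```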
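The Decoding Attention entry above exploits the shape of the decode stage: each step has a single query token, so attention reduces to one dot-product pass over the cached K, a softmax, and a weighted sum over V, with no large matrix multiply, which is why plain CUDA cores suffice. A hedged single-head sketch follows; the kernel name, one-block-per-head layout, and FP32 cache are assumptions for clarity, not that repository's code.

```cuda
// Decode-stage attention for one head with a single query token.
// Launch: decode_attention_one_head<<<1, head_dim, seq_len * sizeof(float)>>>
#include <cuda_fp16.h>
#include <math.h>

// q: [head_dim]; K, V: [seq_len, head_dim] for one head; out: [head_dim].
__global__ void decode_attention_one_head(const float *q, const float *K,
                                          const float *V, float *out,
                                          int seq_len, int head_dim) {
    extern __shared__ float probs[];  // seq_len scores, then softmax probs
    int d = threadIdx.x;

    // 1. Scores: threads stride over the sequence, dotting q with K rows.
    for (int t = threadIdx.x; t < seq_len; t += blockDim.x) {
        float s = 0.f;
        for (int i = 0; i < head_dim; ++i) s += q[i] * K[t * head_dim + i];
        probs[t] = s * rsqrtf((float)head_dim);
    }
    __syncthreads();

    // 2. Softmax over the scores (serial on thread 0 for clarity;
    //    a real kernel would use a parallel reduction here).
    if (threadIdx.x == 0) {
        float m = -INFINITY, z = 0.f;
        for (int t = 0; t < seq_len; ++t) m = fmaxf(m, probs[t]);
        for (int t = 0; t < seq_len; ++t) { probs[t] = expf(probs[t] - m); z += probs[t]; }
        for (int t = 0; t < seq_len; ++t) probs[t] /= z;
    }
    __syncthreads();

    // 3. Weighted sum of V rows: one output element per thread.
    if (d < head_dim) {
        float acc = 0.f;
        for (int t = 0; t < seq_len; ++t) acc += probs[t] * V[t * head_dim + d];
        out[d] = acc;
    }
}
```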