Quantized Attention on GPU
☆44 · Updated Nov 22, 2024
Alternatives and similar repositories for ChituAttention
Users interested in ChituAttention are comparing it to the libraries listed below. Minimal PyTorch sketches of three techniques that recur in these projects (quantized attention, decoding-stage attention, low-bit matmul) follow the list.
- Benchmark tests supporting the TiledCUDA library. ☆18 · Updated Nov 19, 2024
- (WIP) Parallel inference for black-forest-labs' FLUX model. ☆19 · Updated Nov 18, 2024
- Framework that reduces autotune overhead to zero for well-known deployments. ☆97 · Updated Sep 19, 2025
- FP8 flash attention for the Ada architecture, implemented with the cutlass library. ☆79 · Updated Aug 12, 2024
- ☆52 · Updated May 19, 2025
- ☆20 · Updated Sep 28, 2024
- An auxiliary project analyzing the characteristics of KV in DiT attention. ☆33 · Updated Nov 29, 2024
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs. ☆60 · Updated Mar 25, 2025
- Odysseus: Playground of LLM Sequence Parallelism. ☆79 · Updated Jun 17, 2024
- ⚡️ Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs to achieve peak performance. ☆147 · Updated May 10, 2025
- ☆65 · Updated Apr 26, 2025
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel … ☆194 · Updated Jan 28, 2025
- ☆116 · Updated May 16, 2025
- DeeperGEMM: a heavily optimized version. ☆74 · Updated May 5, 2025
- A suite for parallel inference of Diffusion Transformers (DiTs) on multi-GPU clusters. ☆57 · Updated Jul 23, 2024
- A parallel VAE that avoids OOM for high-resolution image generation. ☆85 · Updated Aug 4, 2025
- FlexAttention with FlashAttention-3 support. ☆27 · Updated Oct 5, 2024
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA, using CUDA cores for the decoding stage of LLM inference (the per-step computation is sketched after this list). ☆46 · Updated Jun 11, 2025
- TiledKernel is a code generation library based on macro kernels and a memory-hierarchy graph data structure. ☆19 · Updated May 12, 2024
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads. ☆527 · Updated Feb 10, 2025
- [WIP] Better (FP8) attention for Hopper. ☆32 · Updated Feb 24, 2025
- 🤖 FFPA: extends FlashAttention-2 with Split-D, reaching ~O(1) SRAM complexity for large head dimensions and a 1.8x–3x speedup over SDPA EA. 🎉 ☆251 · Updated Feb 13, 2026
- Kernel library wheel for SGLang. ☆16 · Updated this week
- Flash-Muon: An Efficient Implementation of Muon Optimizer. ☆239 · Updated Jun 15, 2025
- ☆168 · Updated Feb 5, 2026
- Fast low-bit matmul kernels in Triton (the dequantization pattern such kernels fuse is sketched after this list). ☆436 · Updated Feb 1, 2026
- BitBLAS is a library supporting mixed-precision matrix multiplication, especially for quantized LLM deployment. ☆753 · Updated Aug 6, 2025
- ☆79 · Updated Dec 27, 2024
- ☆87 · Updated Jan 23, 2025
- Context-parallel attention that accelerates DiT model inference with dynamic caching (https://wavespeed.ai/). ☆424 · Updated Jul 5, 2025
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding. ☆143 · Updated Dec 4, 2024
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling. ☆21 · Updated Feb 9, 2026
- A collection of memory-efficient attention operators implemented in the Triton language. ☆288 · Updated Jun 5, 2024
- An experimental communicating attention kernel based on DeepEP. ☆35 · Updated Jul 29, 2025
- APPy (Annotated Parallelism for Python) enables users to annotate loops and tensor expressions in Python with compiler directives akin to… ☆30 · Updated Jan 28, 2026
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring. ☆269 · Updated Jul 6, 2025
- [ICLR 2025] ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation. ☆151 · Updated Mar 21, 2025
- ☆262 · Updated Jul 11, 2024
- [ICML 2025 Spotlight] ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference. ☆283 · Updated May 1, 2025
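
The thread connecting ChituAttention to many of the entries above is quantized attention: computing the QK^T scores with low-precision integer arithmetic and dequantizing via scales before the softmax. A minimal PyTorch sketch of the idea, assuming symmetric per-tensor INT8 quantization and a single head; the function names and layout are illustrative, not any of these libraries' actual APIs:

```python
import torch

def quantize_int8(x: torch.Tensor):
    """Symmetric per-tensor INT8 quantization: int8 values plus one fp scale."""
    scale = x.abs().max() / 127.0
    q = (x / scale).round().clamp(-127, 127).to(torch.int8)
    return q, scale

def int8_attention(q, k, v):
    """Single-head attention with INT8-quantized Q and K (illustrative layout).

    q, k, v: (seq_len, head_dim) float tensors. QK^T is computed with an
    integer matmul accumulated in int32, then dequantized with the product
    of the two scales before the softmax; the PV matmul stays in floating
    point, a common accuracy compromise in quantized-attention kernels.
    """
    q_i8, q_scale = quantize_int8(q)
    k_i8, k_scale = quantize_int8(k)
    scores_i32 = q_i8.to(torch.int32) @ k_i8.to(torch.int32).T
    scores = scores_i32.float() * (q_scale * k_scale) / q.shape[-1] ** 0.5
    probs = torch.softmax(scores, dim=-1)
    return probs @ v

# Usage: the quantized output tracks full-precision attention closely.
torch.manual_seed(0)
q, k, v = (torch.randn(128, 64) for _ in range(3))
ref = torch.softmax(q @ k.T / 64 ** 0.5, dim=-1) @ v
print((int8_attention(q, k, v) - ref).abs().max())
```

Finer-grained scales (per-block or per-channel instead of per-tensor) tighten the quantization error further at the cost of more bookkeeping in the kernel.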
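
For the Decoding Attention entry: the decoding stage of LLM inference attends one new query token per step over the cached keys and values, so the core work is a memory-bound GEMV rather than a GEMM, which is why CUDA-core implementations can outperform Tensor-Core ones there. A sketch of the per-step computation, assuming a single head and a pre-filled KV cache (the names are hypothetical):

```python
import torch

def decode_step_attention(q_new, k_cache, v_cache):
    """One decoding step: a single query row against the full KV cache.

    q_new:   (head_dim,)         query of the token being generated
    k_cache: (seq_len, head_dim) cached keys of all previous tokens
    v_cache: (seq_len, head_dim) cached values

    The score computation is a matrix-vector product, so the step is
    dominated by streaming the KV cache from memory, not by arithmetic.
    """
    d = q_new.shape[-1]
    scores = k_cache @ q_new / d ** 0.5   # (seq_len,)
    probs = torch.softmax(scores, dim=-1)
    return probs @ v_cache                # (head_dim,)

# Usage: a 1024-token cache with a 64-dim head.
k_cache, v_cache = torch.randn(1024, 64), torch.randn(1024, 64)
print(decode_step_attention(torch.randn(64), k_cache, v_cache).shape)
```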
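
The low-bit matmul entries (the Triton kernels and BitBLAS) center on the same mixed-precision pattern: weights stored as low-bit integers with per-output-channel scales, dequantized on the fly inside the GEMM. A sketch of the numerics with INT8 weights and float activations; the helper names are hypothetical, and a real kernel fuses the dequantization into its tile loop instead of materializing the floating-point weight matrix:

```python
import torch

def pack_int8_weights(w: torch.Tensor):
    """Per-output-channel symmetric INT8 quantization of a (out, in) weight."""
    scale = w.abs().amax(dim=1, keepdim=True) / 127.0          # (out, 1)
    w_i8 = (w / scale).round().clamp(-127, 127).to(torch.int8)
    return w_i8, scale

def low_bit_matmul(x, w_i8, scale):
    """Mixed-precision matmul: float activations times INT8 weights.

    Written-out dequantization to show the arithmetic the fused kernels
    implement; in practice w_i8 stays int8 in memory and is dequantized
    tile by tile in registers.
    """
    w_fp = w_i8.float() * scale        # dequantize: (out, in)
    return x @ w_fp.T                  # (batch, out)

# Usage: quantization error against the full-precision matmul stays small.
torch.manual_seed(0)
x, w = torch.randn(4, 256), torch.randn(512, 256)
w_i8, s = pack_int8_weights(w)
print((low_bit_matmul(x, w_i8, s) - x @ w.T).abs().max())
```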