INT-FlashAttention2024 / INT-FlashAttention
☆68 · Updated 3 months ago
Alternatives and similar repositories for INT-FlashAttention:
Users interested in INT-FlashAttention are comparing it to the libraries listed below.
- PyTorch bindings for CUTLASS grouped GEMM. ☆81 · Updated 5 months ago
- 16-fold memory access reduction with nearly no loss ☆90 · Updated 3 weeks ago
- Quantized Attention on GPU ☆45 · Updated 5 months ago
- ☆40 · Updated 9 months ago
- ☆55 · Updated last week
- A GPU-optimized system for efficient long-context LLM decoding with a low-bit KV cache. ☆33 · Updated 3 weeks ago
- High-speed GEMV kernels, up to 2.7x speedup over the PyTorch baseline. ☆106 · Updated 9 months ago
- Odysseus: Playground of LLM Sequence Parallelism ☆68 · Updated 10 months ago
- ☆92 · Updated 7 months ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆111 · Updated this week
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆72 · Updated 7 months ago
- ☆69 · Updated last week
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference. ☆36 · Updated 3 weeks ago
- Official PyTorch implementation of "FlatQuant: Flatness Matters for LLM Quantization" ☆120 · Updated 3 weeks ago
- DeeperGEMM: crazy optimized version ☆67 · Updated 3 weeks ago
- Benchmark code for the "Online normalizer calculation for softmax" paper (see the sketch of this recurrence after the list) ☆91 · Updated 6 years ago
- Code implementation of GPTQv2 (https://arxiv.org/abs/2504.02692) ☆29 · Updated last week
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs ☆42 · Updated last month
- FP8 flash attention implemented on the Ada architecture using the cutlass library ☆63 · Updated 8 months ago
- Code for the paper [ICLR 2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference ☆94 · Updated last week
- Llama INT4 CUDA inference with AWQ ☆54 · Updated 3 months ago
- LLM Inference with Microscaling Format ☆20 · Updated 5 months ago
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆91 · Updated 3 weeks ago
- ☆60 · Updated this week
- Official implementation of the ICLR 2024 paper AffineQuant ☆25 · Updated last year
- ☆67 · Updated this week
- QQQ is an innovative and hardware-optimized W4A8 quantization solution for LLMs. ☆116 · Updated 2 weeks ago
- ☆66 · Updated 3 months ago
- ☆140 · Updated 9 months ago
- ☆103 · Updated 7 months ago
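The "Online normalizer calculation for softmax" entry above refers to the single-pass normalizer recurrence that FlashAttention-style kernels (including INT-FlashAttention) build on. For reference only, here is a minimal Python sketch of that recurrence; it is an illustrative assumption, not code taken from any repository listed here.

```python
import math

def online_softmax(xs):
    """Single-pass softmax: maintain a running max and a running,
    max-rescaled sum of exponentials (the online normalizer trick)."""
    running_max = float("-inf")
    running_sum = 0.0
    for x in xs:
        new_max = max(running_max, x)
        # Rescale the accumulated sum to the new maximum before adding the new term.
        running_sum = running_sum * math.exp(running_max - new_max) + math.exp(x - new_max)
        running_max = new_max
    return [math.exp(x - running_max) / running_sum for x in xs]

print(online_softmax([1.0, 2.0, 3.0]))  # ≈ [0.0900, 0.2447, 0.6652]
```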