INT-FlashAttention2024 / INT-FlashAttention
☆83 · Updated 9 months ago
Alternatives and similar repositories for INT-FlashAttention
Users interested in INT-FlashAttention are comparing it to the libraries listed below; a minimal sketch of the INT8-attention idea many of them share appears after the list.
- ☆60 · Updated last year
- LLM Inference with Microscaling Format ☆32 · Updated last year
- [HPCA 2025] A GPU-optimized system for efficient long-context LLM decoding with a low-bit KV cache. ☆62 · Updated last week
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning ☆128 · Updated last week
- ☆65 · Updated 6 months ago
- ☆120 · Updated 3 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆127 · Updated 5 months ago
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs ☆59 · Updated 7 months ago
- An algorithm for weight-activation quantization (W4A4, W4A8) of LLMs, supporting both static and dynamic quantization ☆164 · Updated 2 weeks ago
- ☆80 · Updated last year
- High-speed GEMV kernels, up to 2.7× speedup over the PyTorch baseline. ☆121 · Updated last year
- Quantized Attention on GPU ☆44 · Updated 11 months ago
- 16-fold memory access reduction with nearly no loss ☆106 · Updated 7 months ago
- DeeperGEMM: crazy optimized version ☆73 · Updated 6 months ago
- [ICML 2025] Official PyTorch implementation of "FlatQuant: Flatness Matters for LLM Quantization" ☆186 · Updated last month
- Fast Hadamard transform in CUDA, with a PyTorch interface ☆257 · Updated 3 weeks ago
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving ☆327 · Updated last year
- ☆37 · Updated last year
- Autonomous GPU Kernel Generation via Deep Agents ☆123 · Updated this week
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆118 · Updated last year
- Implement Flash Attention using Cute. ☆96 · Updated 11 months ago
- [ACL 2024] A novel QAT with Self-Distillation framework to enhance ultra low-bit LLMs. ☆126 · Updated last year
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆169 · Updated last year
- ☆102 · Updated last year
- ☆106 · Updated 5 months ago
- QQQ is an innovative and hardware-optimized W4A8 quantization solution for LLMs. ☆145 · Updated 2 months ago
- AFPQ code implementation ☆24 · Updated 2 years ago
- llama INT4 cuda inference with AWQ ☆55 · Updated 9 months ago
- Code implementation of GPTAQ (https://arxiv.org/abs/2504.02692) ☆75 · Updated 3 months ago
- A Suite for Parallel Inference of Diffusion Transformers (DiTs) on multi-GPU Clusters ☆51 · Updated last year
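A common thread in the repositories above (INT-FlashAttention, the quantized-attention kernels, Atom, QQQ, and the W4A8/W8A8 projects) is symmetric low-bit quantization of the attention operands: quantize Q and K to INT8, run the score matmul with integer accumulation, then dequantize with the saved scales before softmax. The PyTorch sketch below emulates that idea numerically with per-tensor INT8 scales; it is an illustration only, not code from any listed repository, and the helper names (`quantize_int8`, `int8_attention_scores`) are made up here.

```python
import torch

def quantize_int8(x: torch.Tensor):
    """Symmetric per-tensor INT8 quantization: returns (int8 tensor, scale)."""
    scale = x.abs().amax().clamp(min=1e-8) / 127.0
    q = torch.clamp(torch.round(x / scale), -128, 127).to(torch.int8)
    return q, scale

def int8_attention_scores(q: torch.Tensor, k: torch.Tensor):
    """Compute softmax(QK^T / sqrt(d)) with Q and K quantized to INT8.

    Real kernels like the CUDA projects listed above keep the matmul in
    INT8 and accumulate in INT32; this sketch emulates that numerically
    on CPU by widening to INT32 before the integer matmul.
    """
    q_i8, q_scale = quantize_int8(q)
    k_i8, k_scale = quantize_int8(k)
    # Integer matmul with INT32 accumulation, then a single dequantize step.
    scores_i32 = q_i8.to(torch.int32) @ k_i8.to(torch.int32).transpose(-1, -2)
    scores = scores_i32.float() * (q_scale * k_scale) / q.shape[-1] ** 0.5
    return torch.softmax(scores, dim=-1)

# Quick numerical check against the FP32 reference.
q = torch.randn(8, 64)
k = torch.randn(8, 64)
ref = torch.softmax(q @ k.T / 64 ** 0.5, dim=-1)
approx = int8_attention_scores(q, k)
print((ref - approx).abs().max())  # small, typically on the order of 1e-2
```

The listed projects differ mainly in where they put the scales (per-tensor, per-token, or per-block) and in fusing this quantize/dequantize logic into the FlashAttention tiling loop rather than materializing the full score matrix as this sketch does.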