INT-FlashAttention2024 / INT-FlashAttention
☆46 · Updated last month
Related projects
Alternatives and complementary repositories for INT-FlashAttention
- PyTorch bindings for CUTLASS grouped GEMM. ☆51 · Updated last week
- An algorithm for static activation quantization of LLMs (a minimal quantization sketch follows this list). ☆67 · Updated this week
- High-speed GEMV kernels with up to a 2.7x speedup over the PyTorch baseline. ☆87 · Updated 3 months ago
- Boosting 4-bit inference kernels with 2:4 sparsity (a pruning sketch follows this list). ☆51 · Updated 2 months ago
- Official PyTorch implementation of FlatQuant: Flatness Matters for LLM Quantization. ☆58 · Updated this week
- Odysseus: Playground of LLM Sequence Parallelism. ☆55 · Updated 4 months ago
- Simple and fast low-bit matmul kernels in CUDA / Triton. ☆137 · Updated this week
- LLaMA INT4 CUDA inference with AWQ. ☆47 · Updated 4 months ago
- QQQ is a hardware-optimized W4A8 quantization solution for LLMs. ☆76 · Updated last month
- Official implementation of the ICLR 2024 paper AffineQuant. ☆21 · Updated 7 months ago
- Standalone Flash Attention v2 kernel without a libtorch dependency. ☆98 · Updated 2 months ago
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models (a smoothing sketch follows this list). ☆20 · Updated 7 months ago
- A standalone GEMM kernel for fp16 activations and quantized weights, extracted from FasterTransformer. ☆85 · Updated 8 months ago
- Model Compression Toolbox for Large Language Models and Diffusion Models. ☆161 · Updated this week
- Code for Palu: Compressing KV-Cache with Low-Rank Projection. ☆54 · Updated this week
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference. ☆112 · Updated 8 months ago
- GPTQ inference TVM kernel. ☆35 · Updated 6 months ago
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference. ☆195 · Updated last week
- FP8 flash attention implemented on the Ada architecture using the CUTLASS library. ☆52 · Updated 3 months ago
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM. ☆146 · Updated 4 months ago
- Repository for sparse fine-tuning of LLMs via a modified version of the MosaicML llmfoundry. ☆38 · Updated 9 months ago
- An easy-to-use package for implementing SmoothQuant for LLMs. ☆82 · Updated 5 months ago
- Quantized Attention on GPU. ☆29 · Updated last week
- [ACL 2024] A novel QAT framework with self-distillation to enhance ultra-low-bit LLMs. ☆81 · Updated 5 months ago
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5). ☆196 · Updated 2 weeks ago
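
For context on the static activation quantization entry above: static schemes fix the quantization scale from a calibration pass instead of recomputing it per batch. Below is a minimal sketch of symmetric per-tensor INT8 static quantization in plain PyTorch; the function names, shapes, and calibration data are illustrative assumptions, not taken from that repository.

```python
import torch

def calibrate_scale(calib_acts: torch.Tensor) -> torch.Tensor:
    # "static" quantization: the scale is fixed once from calibration
    # activations and reused at inference time (no per-batch statistics)
    return calib_acts.abs().amax() / 127.0

def quantize_int8(x: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    # symmetric per-tensor quantization to signed 8-bit integers
    return torch.clamp(torch.round(x / scale), -128, 127).to(torch.int8)

# calibration pass over representative inputs (random here for illustration)
calib = torch.randn(1024, 768)
scale = calibrate_scale(calib)

# inference-time round trip: quantize, then dequantize
x = torch.randn(4, 768)
x_dq = quantize_int8(x, scale).float() * scale
print((x - x_dq).abs().max())  # rounding error, at most ~scale / 2 in range
```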
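The 2:4 sparsity entry refers to the semi-structured pattern that NVIDIA sparse tensor cores accelerate: at most 2 nonzeros in every contiguous group of 4 weights. A magnitude-based pruning sketch under that constraint follows; real kernels additionally pack the surviving values alongside a compact index metadata tensor, which this illustration omits.

```python
import torch

def prune_2_4(w: torch.Tensor) -> torch.Tensor:
    # keep the 2 largest-magnitude weights in each contiguous group of 4
    assert w.numel() % 4 == 0, "weight count must be a multiple of 4"
    groups = w.reshape(-1, 4)
    keep = groups.abs().topk(2, dim=1).indices
    mask = torch.zeros_like(groups, dtype=torch.bool).scatter_(1, keep, True)
    return torch.where(mask, groups, torch.zeros_like(groups)).reshape(w.shape)

w = torch.randn(8, 16)
w_sparse = prune_2_4(w)
# exactly half the weights survive, and every group of 4 has <= 2 nonzeros
assert (w_sparse.reshape(-1, 4) != 0).sum(dim=1).max() <= 2
```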
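And for the SmoothQuant entry: the paper's core trick migrates activation outliers into the weights through a per-input-channel rescaling that leaves the matmul result unchanged. A plain-PyTorch sketch with made-up shapes is below; alpha=0.5 is the paper's default migration strength, and everything else here is illustrative.

```python
import torch

def smooth_scales(act_absmax, w_absmax, alpha=0.5):
    # per-input-channel factor s_j = max|X_j|^alpha / max|W_j|^(1 - alpha)
    return (act_absmax.pow(alpha) / w_absmax.pow(1 - alpha)).clamp(min=1e-5)

torch.manual_seed(0)
x = torch.randn(4, 8)
x[:, 0] *= 20.0                      # an outlier activation channel
w = torch.randn(8, 8)                # (in_features, out_features)

s = smooth_scales(x.abs().amax(dim=0), w.abs().amax(dim=1))
x_smooth = x / s                     # outliers shrink here...
w_smooth = w * s[:, None]            # ...and migrate into the weights

# the product is mathematically unchanged, but x_smooth now has a much
# flatter per-channel range and quantizes with far less clipping error
assert torch.allclose(x @ w, x_smooth @ w_smooth, atol=1e-4)
```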