ssiu / flash-attention-turing
☆70 · Updated this week
Alternatives and similar repositories for flash-attention-turing
Users interested in flash-attention-turing are comparing it to the libraries listed below.
- A simple Flash Attention v2 implementation with ROCm (RDNA3 GPU, rocWMMA), mainly used for Stable Diffusion (ComfyUI) in Windows ZLUDA en… ☆51 · Updated last year
- Fast and memory-efficient exact attention ☆214 · Updated last week
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5). ☆276 · Updated 6 months ago
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆244 · Updated this week
- ☆206 · Updated 9 months ago
- ☆172 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆114 · Updated this week
- QQQ is a hardware-optimized W4A8 quantization solution for LLMs. ☆154 · Updated 5 months ago
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆274 · Updated 6 months ago
- CPM.cu is a lightweight, high-performance CUDA implementation for LLMs, optimized for end-device inference and featuring cutting-edge tec… ☆226 · Updated 3 weeks ago
- ☆437 · Updated 4 months ago
- Ahead of Time (AOT) Triton Math Library ☆88 · Updated 2 weeks ago
- 🎯 An accuracy-first, highly efficient quantization toolkit for LLMs, designed to minimize quality degradation across Weight-Only Quantiza… ☆845 · Updated this week
- NVIDIA Linux open GPU with P2P support ☆129 · Updated this week
- ☆105 · Updated last year
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment. ☆751 · Updated 6 months ago
- Fast low-bit matmul kernels in Triton ☆427 · Updated last week
- High-speed and easy-to-use LLM serving framework for local deployment ☆146 · Updated 6 months ago
- ☆130 · Updated last year
- ☆96 · Updated 10 months ago
- ☆163 · Updated 7 months ago
- ☆61 · Updated 6 months ago
- Fast and memory-efficient exact attention ☆114 · Updated this week
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios. ☆44 · Updated 11 months ago
- OpenAI Triton backend for Intel® GPUs ☆226 · Updated this week
- A general 2-8 bit quantization toolbox with GPTQ/AWQ/HQQ/VPTQ, with easy export to ONNX/ONNX Runtime. ☆184 · Updated 10 months ago
- Development repository for the Triton language and compiler ☆140 · Updated this week
- Standalone Flash Attention v2 kernel without libtorch dependency ☆114 · Updated last year
- High-speed GEMV kernels, up to 2.7x speedup over the PyTorch baseline. ☆127 · Updated last year
- DFloat11 [NeurIPS '25]: Lossless Compression of LLMs and DiTs for Efficient GPU Inference ☆603 · Updated 2 months ago
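
Several of the entries above (the "Fast and memory-efficient exact attention" forks, the standalone Flash Attention v2 kernel, and flash-attention-turing itself) accelerate the same exact-attention primitive, softmax(QKᵀ/√d)·V. As a rough point of reference, the sketch below shows how that fused kernel is requested through plain PyTorch; it assumes PyTorch 2.3+ and a CUDA GPU, and it illustrates the baseline these kernels compete with, not the API of any listed repository.

```python
# Reference for the exact attention these kernels accelerate; a minimal
# sketch, assuming PyTorch >= 2.3 (torch.nn.attention) and a CUDA GPU.
import torch
import torch.nn.functional as F
from torch.nn.attention import SDPBackend, sdpa_kernel

# (batch, heads, sequence length, head dim) in fp16
q = torch.randn(1, 8, 1024, 64, device="cuda", dtype=torch.float16)
k = torch.randn_like(q)
v = torch.randn_like(q)

# Restrict SDPA to the fused FlashAttention backend. Stock FlashAttention-2
# targets Ampere (sm80) and newer, which is the gap projects like
# flash-attention-turing (Turing, sm75) aim to fill; on an unsupported GPU
# this raises a RuntimeError rather than silently falling back.
with sdpa_kernel(SDPBackend.FLASH_ATTENTION):
    out = F.scaled_dot_product_attention(q, k, v)  # softmax(QK^T / sqrt(d)) V
```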