Repeerc / flash-attention-v2-RDNA3-minimal
A simple Flash Attention v2 implementation for ROCm (RDNA3 GPUs, rocWMMA), mainly used for Stable Diffusion (ComfyUI) in Windows ZLUDA environments.
☆47 · Updated last year
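For context, FlashAttention-style kernels like this one compute exact scaled dot-product attention in tiles, without materializing the full attention matrix. Below is a minimal PyTorch sketch of the naive reference computation such kernels replace; it is not this repository's API, and `naive_sdpa` is a hypothetical name used only for illustration.

```python
import torch

def naive_sdpa(q, k, v):
    """Naive scaled dot-product attention reference (hypothetical helper,
    not part of flash-attention-v2-RDNA3-minimal's API).
    q, k, v: (batch, heads, seq_len, head_dim) tensors."""
    scale = q.shape[-1] ** -0.5
    # Materializes the full (seq_len x seq_len) score matrix, which is the
    # memory cost that tiled FlashAttention kernels avoid.
    scores = torch.matmul(q, k.transpose(-2, -1)) * scale
    probs = torch.softmax(scores, dim=-1)
    return torch.matmul(probs, v)
```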
Alternatives and similar repositories for flash-attention-v2-RDNA3-minimal
Users interested in flash-attention-v2-RDNA3-minimal are comparing it to the libraries listed below.
- Fast and memory-efficient exact attention ☆188 · Updated last week
- ☆76 · Updated 8 months ago
- Development repository for the Triton language and compiler ☆130 · Updated last week
- Model Compression Toolbox for Large Language Models and Diffusion Models ☆628 · Updated last month
- AITemplate is a Python framework which renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (N… ☆12 · Updated last year
- ☆55 · Updated 2 months ago
- ⚡️Write HGEMM from scratch using Tensor Cores with the WMMA, MMA and CuTe APIs, achieving peak performance. ☆112 · Updated 4 months ago
- AI Tensor Engine for ROCm ☆276 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆101 · Updated this week
- A CUDA kernel for NHWC GroupNorm for PyTorch ☆20 · Updated 10 months ago
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning ☆92 · Updated last week
- High-speed GEMV kernels, up to 2.7x speedup over the PyTorch baseline. ☆114 · Updated last year
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆157 · Updated last week
- ☆150 · Updated 2 months ago
- QQQ is an innovative and hardware-optimized W4A8 quantization solution for LLMs. ☆139 · Updated 3 weeks ago
- Efficient GPU support for LLM inference with x-bit quantization (e.g. FP6, FP5). ☆265 · Updated 2 months ago
- PyTorch half-precision GEMM library with fused optional bias and optional ReLU/GELU ☆73 · Updated 9 months ago
- Ahead-of-Time (AOT) Triton Math Library ☆76 · Updated 2 weeks ago
- Standalone Flash Attention v2 kernel without a libtorch dependency ☆110 · Updated last year
- ☆98 · Updated last year
- FP8 flash attention implemented on the Ada architecture using the CUTLASS library ☆75 · Updated last year
- Flash Attention implemented using CuTe. ☆95 · Updated 9 months ago
- Triton adapter for Ascend. Mirror of https://gitee.com/ascend/triton-ascend ☆71 · Updated this week
- (WIP) Parallel inference for black-forest-labs' FLUX model. ☆19 · Updated 10 months ago
- Triton for DSA ☆36 · Updated last week
- High-performance inference engine for diffusion models ☆91 · Updated 2 weeks ago
- [ICML 2024] BiLLM: Pushing the Limit of Post-Training Quantization for LLMs ☆224 · Updated 8 months ago
- OpenAI Triton backend for Intel® GPUs ☆207 · Updated this week
- A standalone GEMM kernel for FP16 activations and quantized weights, extracted from FasterTransformer ☆94 · Updated this week
- ☆103 · Updated 4 months ago