weishengying / tiny-flash-attention
A stripped-down flash-attention implemented with cutlass, intended to be educational.
☆50 · Updated last year
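
The core idea such a teaching implementation has to cover is the online-softmax recurrence, which lets attention be computed in a single streaming pass over keys/values without materializing the full score matrix. Below is a minimal CPU reference sketch of that recurrence for one query row; it is an illustration under assumed layouts, not the repository's cutlass kernels, and the function name is made up.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// One attention row computed in a single streaming pass over the keys,
// using the online-softmax recurrence: running max `m`, running
// denominator `l`, and an output accumulator `o` that gets rescaled
// whenever the max grows.  Flash-attention kernels apply the same step
// once per K/V tile held in shared memory.
std::vector<float> streaming_attention_row(
    const std::vector<float>& q,               // [d]
    const std::vector<std::vector<float>>& K,  // [n][d]
    const std::vector<std::vector<float>>& V,  // [n][d]
    float scale) {
  const size_t d = q.size();
  float m = -INFINITY, l = 0.0f;
  std::vector<float> o(d, 0.0f);
  for (size_t j = 0; j < K.size(); ++j) {
    float s = 0.0f;                            // score = scale * (q . k_j)
    for (size_t t = 0; t < d; ++t) s += q[t] * K[j][t];
    s *= scale;
    const float m_new = std::max(m, s);
    const float corr = std::exp(m - m_new);    // rescale previous partials
    const float p = std::exp(s - m_new);
    l = l * corr + p;
    for (size_t t = 0; t < d; ++t) o[t] = o[t] * corr + p * V[j][t];
    m = m_new;
  }
  for (size_t t = 0; t < d; ++t) o[t] /= l;    // final normalization
  return o;
}
```

On the GPU, the same rescale-and-accumulate step runs per K/V tile, which is where cutlass's tiled copies and MMA abstractions come in.
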
Alternatives and similar repositories for tiny-flash-attention
Users interested in tiny-flash-attention are comparing it to the repositories listed below.
- An fp8 flash attention for the Ada architecture, implemented with the cutlass repository ☆76 · Updated last year
- ☆18 · Updated last year
- ☆107 · Updated 5 months ago
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios ☆41 · Updated 7 months ago
- ☆150 · Updated 9 months ago
- ☆44 · Updated last year
- Several optimization methods for half-precision general matrix-vector multiplication (HGEMV) using CUDA cores (a baseline sketch follows this list) ☆67 · Updated last year
- ☆137 · Updated 10 months ago
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆94 · Updated last month
- An inference framework for the llama model, implemented in CUDA C++ ☆62 · Updated 11 months ago
- Benchmark code for the "Online normalizer calculation for softmax" paper ☆102 · Updated 7 years ago
- ☆100 · Updated last year
- ☆56 · Updated 3 months ago
- Optimizations of softmax in Triton covering many cases ☆21 · Updated last year
- Examples of CUDA kernels implemented with Cutlass CuTe
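
For the HGEMV entry above, a common CUDA-core baseline assigns one warp per output row, accumulates in fp32, and reduces the partial dot product with warp shuffles. The sketch below is a generic illustration of that pattern, not code from any listed repository; the kernel name and launch shape are assumptions.

```cuda
#include <cuda_fp16.h>

// Baseline HGEMV: y = A * x, with A an M x N row-major fp16 matrix.
// One warp handles one output row; each lane accumulates a strided
// slice of the dot product in fp32, then the warp reduces via shuffles.
__global__ void hgemv_warp_per_row(const __half* __restrict__ A,
                                   const __half* __restrict__ x,
                                   __half* __restrict__ y, int M, int N) {
  int row = blockIdx.x * blockDim.y + threadIdx.y;  // one warp per row
  if (row >= M) return;
  float acc = 0.0f;
  for (int col = threadIdx.x; col < N; col += 32)   // lane-strided columns
    acc += __half2float(A[row * N + col]) * __half2float(x[col]);
  for (int offset = 16; offset > 0; offset >>= 1)   // warp reduction
    acc += __shfl_down_sync(0xffffffff, acc, offset);
  if (threadIdx.x == 0) y[row] = __float2half(acc);
}

// Launch sketch, 4 warps per block:
//   dim3 block(32, 4);
//   hgemv_warp_per_row<<<(M + 3) / 4, block>>>(A, x, y, M, N);
```

The optimized variants in such repos typically improve on this with vectorized `__half2` loads and better memory coalescing, but the warp-per-row reduction is the usual starting point.
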