ShaYeBuHui01 / flash_attention_inference
Performance of the C++ interface of Flash Attention and Flash Attention v2 in large language model (LLM) inference scenarios.
☆14 Updated last year
Related projects
Alternatives and complementary repositories for flash_attention_inference
- CPU, memory, compiler, and parallel programming ☆24 Updated last week
- ☆32 Updated 3 weeks ago
- ☆56 Updated this week
- Several optimization methods for half-precision general matrix-vector multiplication (HGEMV) using CUDA cores (see the HGEMV sketch after this list). ☆48 Updated 2 months ago
- Standalone Flash Attention v2 kernel without LibTorch dependency ☆98 Updated 2 months ago
- Examples of CUDA implementations using CUTLASS CuTe ☆82 Updated last week
- ☆22 Updated 6 months ago
- Play GEMM with TVM ☆84 Updated last year
- Chinese translation of the CUDA PTX ISA documentation ☆25 Updated 8 months ago
- This project optimizes convolution operators on GPU, including GEMM-based (implicit GEMM) convolution. ☆18 Updated last month
- ☆108 Updated 2 years ago
- ☆103 Updated 6 months ago
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆85 Updated 8 months ago
- ☆79 Updated last year
- ☆78 Updated 8 months ago
- ☆136 Updated this week
- An extension library of the WMMA API (Tensor Core API); see the WMMA sketch after this list. ☆82 Updated 3 months ago
- A simplified flash-attention implementation using CUTLASS, intended for teaching; the online-softmax sketch after this list shows the core recurrence. ☆31 Updated 2 months ago
- TiledCUDA is a highly efficient kernel template library designed to elevate CUDA C’s level of abstraction for processing tiles. ☆148 Updated this week
- ☆50 Updated 2 years ago
- Forward and backward attention DNN operators implemented with LibTorch, cuDNN, and Eigen. ☆27 Updated last year
- FP8 flash attention implemented on the Ada architecture using the CUTLASS library ☆51 Updated 2 months ago
- High-speed GEMV kernels, with up to a 2.7x speedup over the PyTorch baseline. ☆87 Updated 3 months ago
- LLaMA INT4 CUDA inference with AWQ ☆47 Updated 4 months ago
- ☆164 Updated this week
- An unofficial CUDA assembler, for all generations of SASS, hopefully :) ☆78 Updated last year
- Performance of the C++ interface of Flash Attention and Flash Attention v2 in large language model (LLM) inference scenarios. ☆26 Updated 2 months ago
- ☆140 Updated 6 months ago
- CUDA Matrix Multiplication Optimization ☆139 Updated 3 months ago
- An easy-to-understand TensorOp Matmul tutorial ☆287 Updated last month
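
Several entries above (the headline repo, the standalone Flash Attention v2 kernel, and the CUTLASS teaching implementations) revolve around the same core idea: computing softmax(QKᵀ)V in a streaming fashion with a running max and running sum, so the full attention score matrix is never materialized. Below is a minimal illustrative CUDA sketch of that online-softmax recurrence, not taken from any of the listed repos; the fp32 types, fixed head size, and one-thread-per-query mapping are simplifying assumptions.

```cuda
#include <cuda_runtime.h>
#include <math.h>

// Illustrative only: one thread per query row, fp32, fixed head size.
// Real kernels (e.g. Flash Attention v2) tile K/V through shared memory
// and use tensor cores; this shows just the online-softmax recurrence.
constexpr int HEAD_DIM = 64;

__global__ void online_softmax_attention(const float* Q, const float* K,
                                         const float* V, float* O,
                                         int num_q, int num_kv) {
    int q = blockIdx.x * blockDim.x + threadIdx.x;
    if (q >= num_q) return;

    float m = -INFINITY;            // running max of scores
    float l = 0.0f;                 // running sum of exp(score - m)
    float acc[HEAD_DIM] = {0.0f};   // unnormalized output accumulator
    const float scale = rsqrtf((float)HEAD_DIM);

    for (int k = 0; k < num_kv; ++k) {
        float s = 0.0f;             // s = dot(Q[q], K[k]) * scale
        for (int d = 0; d < HEAD_DIM; ++d)
            s += Q[q * HEAD_DIM + d] * K[k * HEAD_DIM + d];
        s *= scale;

        float m_new = fmaxf(m, s);
        float corr  = expf(m - m_new);   // rescale previous accumulator
        float p     = expf(s - m_new);   // weight of the current key
        l = l * corr + p;
        for (int d = 0; d < HEAD_DIM; ++d)
            acc[d] = acc[d] * corr + p * V[k * HEAD_DIM + d];
        m = m_new;
    }
    for (int d = 0; d < HEAD_DIM; ++d)
        O[q * HEAD_DIM + d] = acc[d] / l;   // final softmax normalization
}
```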
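
For the HGEMV and GEMV entries, the usual starting point is a warp-per-row kernel with a shuffle reduction. This is a hedged sketch under simple assumptions (row-major fp16 A, fp32 accumulation, blockDim.x a multiple of 32); the listed repos layer vectorized half2 loads and multi-row scheduling on top of this basic shape.

```cuda
#include <cuda_fp16.h>
#include <cuda_runtime.h>

// y = A * x, where A is M x N row-major fp16; accumulate in fp32.
// One warp per output row; lanes stride across the columns.
__global__ void hgemv_warp_per_row(const __half* A, const __half* x,
                                   float* y, int M, int N) {
    const int lane = threadIdx.x & 31;                               // lane within warp
    const int row  = blockIdx.x * (blockDim.x >> 5) + (threadIdx.x >> 5);
    if (row >= M) return;

    float acc = 0.0f;
    for (int col = lane; col < N; col += 32)
        acc += __half2float(A[row * N + col]) * __half2float(x[col]);

    // Tree reduction across the 32 lanes of the warp.
    for (int offset = 16; offset > 0; offset >>= 1)
        acc += __shfl_down_sync(0xffffffff, acc, offset);

    if (lane == 0) y[row] = acc;
}

// Example launch: 256 threads = 8 warps per block, one warp per row.
// hgemv_warp_per_row<<<(M + 7) / 8, 256>>>(A, x, y, M, N);
```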
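
For the WMMA API entry, the smallest useful unit is one warp computing a single 16x16x16 fragment product; extension libraries wrap richer tile shapes and layouts around these primitives. A minimal sketch assuming fp16 inputs, fp32 accumulation, and a launch of exactly one warp (compute capability 7.0+):

```cuda
#include <mma.h>
#include <cuda_fp16.h>
using namespace nvcuda;

// One warp computes C (16x16, fp32) = A (16x16, fp16) * B (16x16, fp16).
__global__ void wmma_single_tile(const __half* A, const __half* B, float* C) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, __half, wmma::row_major> a;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, __half, wmma::col_major> b;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c;

    wmma::fill_fragment(c, 0.0f);
    wmma::load_matrix_sync(a, A, 16);   // leading dimension = 16
    wmma::load_matrix_sync(b, B, 16);
    wmma::mma_sync(c, a, b, c);         // c = a * b + c on tensor cores
    wmma::store_matrix_sync(C, c, 16, wmma::mem_row_major);
}

// Launch with a single warp: wmma_single_tile<<<1, 32>>>(A, B, C);
```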