Bruce-Lee-LY / flash_attention_inference
Performance benchmarking of the C++ interfaces of Flash Attention and Flash Attention v2 in large language model (LLM) inference scenarios.
☆29 · Updated 2 months ago
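All of the attention kernels collected below compute the same operation, O = softmax(QK^T / √d) V. As a point of reference, here is a minimal single-threaded sketch (a hypothetical `attention_ref` helper, not part of any repository listed here) that can serve as a CPU baseline when validating a flash-attention kernel's output:

```cuda
#include <cmath>
#include <vector>

// Reference attention for one head: O = softmax(Q K^T / sqrt(d)) V.
// Q is [m x d], K and V are [n x d], O is [m x d], all row-major.
// Illustrative baseline only; real kernels tile this loop nest.
void attention_ref(const float* Q, const float* K, const float* V,
                   float* O, int m, int n, int d) {
    const float scale = 1.0f / std::sqrt(static_cast<float>(d));
    std::vector<float> s(n);
    for (int i = 0; i < m; ++i) {
        // Scores for query row i, tracking the row max for a stable softmax.
        float row_max = -INFINITY;
        for (int j = 0; j < n; ++j) {
            float dot = 0.0f;
            for (int k = 0; k < d; ++k) dot += Q[i * d + k] * K[j * d + k];
            s[j] = dot * scale;
            row_max = std::fmax(row_max, s[j]);
        }
        float denom = 0.0f;
        for (int j = 0; j < n; ++j) {
            s[j] = std::exp(s[j] - row_max);
            denom += s[j];
        }
        // O[i] = sum_j softmax(s)_j * V[j].
        for (int k = 0; k < d; ++k) {
            float acc = 0.0f;
            for (int j = 0; j < n; ++j) acc += s[j] * V[j * d + k];
            O[i * d + k] = acc / denom;
        }
    }
}
```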
Related projects
Alternatives and complementary repositories for flash_attention_inference
- Standalone Flash Attention v2 kernel without a libtorch dependency ☆98 · Updated 2 months ago
- A standalone GEMM kernel for fp16 activations and quantized weights, extracted from FasterTransformer ☆85 · Updated 8 months ago
- FP8 flash attention for the Ada architecture, implemented with the cutlass library ☆52 · Updated 3 months ago
- Flash attention tutorial written in Python, Triton, CUDA, and Cutlass ☆202 · Updated 5 months ago
- High-speed GEMV kernels with up to 2.7x speedup over the PyTorch baseline ☆90 · Updated 4 months ago
- An easy-to-understand TensorOp matmul tutorial ☆290 · Updated 2 months ago
- Examples of CUDA implementations using Cutlass CuTe ☆98 · Updated last week
- Several optimization methods for half-precision general matrix-vector multiplication (HGEMV) using CUDA cores; a naive baseline kernel is sketched after this list ☆49 · Updated 2 months ago
- QQQ, a hardware-optimized W4A8 quantization solution for LLMs ☆87 · Updated last month
- TiledCUDA, an efficient kernel template library that raises CUDA C's level of abstraction for processing tiles ☆154 · Updated this week
- GEMM experiments with TVM ☆84 · Updated last year
- LLaMA INT4 CUDA inference with AWQ ☆48 · Updated 4 months ago
- A collection of memory-efficient attention operators implemented in the Triton language ☆219 · Updated 5 months ago
- An easy-to-use package for applying SmoothQuant to LLMs; the scale-migration idea is sketched after this list ☆83 · Updated 6 months ago
- Benchmark code for the "Online normalizer calculation for softmax" paper; the single-pass algorithm is sketched after this list ☆59 · Updated 6 years ago
- Transformer-related optimizations, including BERT and GPT ☆60 · Updated last year
- Dynamic memory management for serving LLMs without PagedAttention ☆238 · Updated last week
- A stripped-down flash-attention implementation using cutlass, intended for teaching ☆32 · Updated 3 months ago
- A fast communication-overlapping library for tensor parallelism on GPUs ☆224 · Updated 3 weeks ago
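As referenced from the HGEMV item above, a naive baseline makes the optimization target concrete: one thread per output row, half-precision storage, float accumulation. This is a sketch only (the kernel name `hgemv_naive` is hypothetical); the repositories above layer vectorized loads, warp-level reductions, and shared-memory staging on top of this idea.

```cuda
#include <cuda_fp16.h>

// Naive HGEMV: y = A * x with half-precision storage and fp32 accumulation.
// A is [m x n] row-major; each thread produces one output row.
__global__ void hgemv_naive(const __half* A, const __half* x, __half* y,
                            int m, int n) {
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row >= m) return;
    float acc = 0.0f;  // accumulate in fp32 to limit rounding error
    for (int col = 0; col < n; ++col) {
        acc += __half2float(A[row * n + col]) * __half2float(x[col]);
    }
    y[row] = __float2half(acc);
}
```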
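The SmoothQuant item above refers to a scale-migration trick: a per-input-channel scale s_j = max|X_j|^α / max|W_j|^(1-α) moves quantization difficulty from activations to weights while keeping the product unchanged, since X·W = (X/s)·(s·W). A minimal sketch, assuming per-channel absolute maxima were collected offline; the function name `smooth_scales` and its signature are illustrative, not the package's API:

```cuda
#include <cmath>
#include <vector>

// Compute per-input-channel smoothing scales: apply X' = X / s and
// W' = s * W afterwards so that X' W' == X W but X' is easier to quantize.
std::vector<float> smooth_scales(const std::vector<float>& act_absmax,
                                 const std::vector<float>& wgt_absmax,
                                 float alpha /* e.g. 0.5 */) {
    std::vector<float> s(act_absmax.size());
    for (size_t j = 0; j < s.size(); ++j) {
        s[j] = std::pow(act_absmax[j], alpha) /
               std::pow(wgt_absmax[j], 1.0f - alpha);
    }
    return s;
}
```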
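Finally, the online-normalizer benchmark above measures the single-pass softmax reduction from the Milakov and Gimelshein paper, the trick flash attention builds on: the running maximum and the normalizer are updated together, so the input is read once instead of twice. A minimal sketch:

```cuda
#include <cmath>

// Single-pass "online" softmax: maintain the running max m and the
// normalizer d = sum of exp(x[i] - m) in one sweep over the input.
void online_softmax(const float* x, float* y, int n) {
    float m = -INFINITY;   // running maximum
    float d = 0.0f;        // running normalizer
    for (int i = 0; i < n; ++i) {
        float m_new = std::fmax(m, x[i]);
        // Rescale the old normalizer to the new max, then add the new term.
        d = d * std::exp(m - m_new) + std::exp(x[i] - m_new);
        m = m_new;
    }
    for (int i = 0; i < n; ++i) y[i] = std::exp(x[i] - m) / d;
}
```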