66RING / tiny-flash-attention
Flash attention tutorial written in Python, Triton, CUDA, and CUTLASS.
☆202 · Updated 5 months ago
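As a taste of what the tutorial builds up to, here is a minimal NumPy sketch of the tiled, online-softmax forward pass at the heart of flash attention. It is a simplification under assumptions, not code from this repo: the tile sizes are illustrative, and masking, dropout, and the backward pass are omitted.

```python
import numpy as np

def flash_attention_forward(Q, K, V, block_m=64, block_n=64):
    """O = softmax(Q K^T / sqrt(d)) V, computed one tile at a time, so the
    full (seq, seq) score matrix is never materialized."""
    seq_len, d = Q.shape
    scale = 1.0 / np.sqrt(d)
    O = np.zeros_like(Q)
    for i in range(0, seq_len, block_m):
        q = Q[i:i + block_m]                    # query tile
        m = np.full(q.shape[0], -np.inf)        # running row-wise max
        l = np.zeros(q.shape[0])                # running softmax denominator
        acc = np.zeros_like(q)                  # unnormalized output accumulator
        for j in range(0, seq_len, block_n):
            s = q @ K[j:j + block_n].T * scale      # scores for this K/V tile
            m_new = np.maximum(m, s.max(axis=1))    # updated row max
            p = np.exp(s - m_new[:, None])          # tile probabilities
            alpha = np.exp(m - m_new)               # rescales the old statistics
            l = alpha * l + p.sum(axis=1)
            acc = alpha[:, None] * acc + p @ V[j:j + block_n]
            m = m_new
        O[i:i + block_m] = acc / l[:, None]         # normalize once at the end
    return O

# Quick check against the naive reference:
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((256, 64)) for _ in range(3))
S = Q @ K.T / np.sqrt(64)
P = np.exp(S - S.max(axis=1, keepdims=True))
assert np.allclose(flash_attention_forward(Q, K, V),
                   (P / P.sum(axis=1, keepdims=True)) @ V)
```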
Related projects
Alternatives and complementary repositories for tiny-flash-attention
- An easy-to-understand TensorOp Matmul tutorial ☆290 · Updated 2 months ago
- Examples of CUDA implementations using CUTLASS CuTe ☆98 · Updated last week
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios. ☆29 · Updated 2 months ago
- A collection of memory-efficient attention operators implemented in the Triton language. ☆219 · Updated 5 months ago
- A fast communication-overlapping library for tensor parallelism on GPUs. ☆224 · Updated 3 weeks ago
- Learning how CUDA works ☆169 · Updated 3 months ago
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆238 · Updated last week
- Puzzles for learning Triton; play with minimal environment configuration! ☆121 · Updated last week
- Several optimization methods for half-precision general matrix multiplication (HGEMM) using tensor cores with the WMMA API and MMA PTX instructions ☆302 · Updated 2 months ago
- FlagGems is an operator library for large language models implemented in the Triton language. ☆342 · Updated this week
- TiledCUDA is a highly efficient kernel template library designed to elevate CUDA C’s level of abstraction for processing tiles. ☆154 · Updated this week
- A simple, high-performance CUDA GEMM implementation. ☆335 · Updated 10 months ago
- QQQ is an innovative and hardware-optimized W4A8 quantization solution for LLMs (a sketch of the W4A8 idea follows this list). ☆87 · Updated last month
- FP8 flash attention implemented with the cutlass repository on the Ada architecture ☆52 · Updated 3 months ago
- Step-by-step optimization of CUDA SGEMM (see the blocked-matmul sketch after this list) ☆240 · Updated 2 years ago
- A summary of awesome work on optimizing LLM inference ☆37 · Updated 2 weeks ago
- High-speed GEMV kernels, up to 2.7x speedup over the PyTorch baseline. ☆90 · Updated 4 months ago
- Since the emergence of ChatGPT in 2022, accelerating large language models has become increasingly important. Here is a list of papers. ☆175 · Updated 2 weeks ago
- Several optimization methods for half-precision general matrix-vector multiplication (HGEMV) using CUDA cores. ☆49 · Updated 2 months ago
- High-performance Transformer implementation in C++. ☆82 · Updated 2 months ago
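For the QQQ entry above, here is a minimal NumPy sketch of the W4A8 numerics: weights quantized to 4-bit integers per output channel, activations to 8-bit per tensor, with float scales for dequantization. The symmetric scheme and the names here are illustrative assumptions; QQQ's actual kernels pack the int4 weights and run the int32 accumulation on tensor cores.

```python
import numpy as np

def quantize(x, bits, axis=None):
    # Symmetric quantization: scale so the largest magnitude maps to qmax.
    qmax = 2 ** (bits - 1) - 1                    # 7 for int4, 127 for int8
    scale = np.abs(x).max(axis=axis, keepdims=True) / qmax
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def w4a8_matmul(x, w):
    # x: (batch, in) activations, w: (out, in) weights.
    xq, sx = quantize(x, bits=8)                  # per-tensor int8 activations
    wq, sw = quantize(w, bits=4, axis=1)          # per-output-channel int4 weights
    acc = xq.astype(np.int32) @ wq.T.astype(np.int32)   # integer accumulate
    return acc * (sx * sw.T)                      # dequantize with both scales
```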
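And for the SGEMM entry, a minimal NumPy sketch of the block decomposition those step-by-step CUDA kernels optimize. Each (i, j) tile stands in for one thread block's shared-memory workload; the tile size is illustrative, and none of this is code from the listed repo.

```python
import numpy as np

def blocked_sgemm(A, B, tile=32):
    # C = A @ B computed tile by tile: the k loop mirrors a CUDA kernel
    # staging A and B tiles through shared memory before accumulating.
    M, K = A.shape
    _, N = B.shape
    C = np.zeros((M, N), dtype=np.float32)
    for i in range(0, M, tile):              # "thread block" rows of C
        for j in range(0, N, tile):          # "thread block" columns of C
            acc = np.zeros_like(C[i:i + tile, j:j + tile])
            for k in range(0, K, tile):      # march along K, reusing staged tiles
                acc += A[i:i + tile, k:k + tile] @ B[k:k + tile, j:j + tile]
            C[i:i + tile, j:j + tile] = acc
    return C
```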