Flash Attention in ~100 lines of CUDA (forward pass only)
☆1,098 · Dec 30, 2024 · Updated last year
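For context on what this repo and the alternatives below implement, here is a minimal, illustrative sketch of the tiled flash-attention forward pass in plain CUDA: one thread block per (batch, head) pair, K/V tiles staged through shared memory, and an online softmax that keeps a running row max and row sum so the full N×N score matrix is never materialized. This is a sketch written for this page, not flash-attention-minimal's actual kernel; the tile sizes, head dimension, kernel name, and launch configuration are assumptions chosen for brevity.

```cuda
// Minimal sketch of a flash-attention forward kernel (illustrative, not the repo's code).
// Assumptions: fp32 tensors laid out as [batch*heads, N, HEAD_DIM], Br threads per block,
// one block per (batch, head). Tile sizes and HEAD_DIM are fixed here for simplicity.
#include <cuda_runtime.h>
#include <math.h>

constexpr int Bc = 32, Br = 32, HEAD_DIM = 64;  // hypothetical tile sizes / head dim

__global__ void flash_attn_fwd(const float* Q, const float* K, const float* V,
                               float* O, int N, float softmax_scale) {
    // blockIdx.x indexes one (batch, head) pair.
    const float* q = Q + blockIdx.x * N * HEAD_DIM;
    const float* k = K + blockIdx.x * N * HEAD_DIM;
    const float* v = V + blockIdx.x * N * HEAD_DIM;
    float*       o = O + blockIdx.x * N * HEAD_DIM;

    __shared__ float Kj[Bc][HEAD_DIM];
    __shared__ float Vj[Bc][HEAD_DIM];

    const int tx = threadIdx.x;               // one thread per query row of a tile

    for (int i0 = 0; i0 < N; i0 += Br) {      // outer loop: tiles of query rows
        const int i = i0 + tx;                // this thread's query row
        float qi[HEAD_DIM], oi[HEAD_DIM];
        float row_max = -INFINITY, row_sum = 0.f;
        if (i < N)
            for (int d = 0; d < HEAD_DIM; ++d) { qi[d] = q[i * HEAD_DIM + d]; oi[d] = 0.f; }

        for (int j0 = 0; j0 < N; j0 += Bc) {  // inner loop: tiles of key/value rows
            // All threads cooperatively stage the K/V tile in shared memory.
            for (int e = tx; e < Bc * HEAD_DIM; e += Br) {
                int r = e / HEAD_DIM, d = e % HEAD_DIM, src = j0 + r;
                Kj[r][d] = (src < N) ? k[src * HEAD_DIM + d] : 0.f;
                Vj[r][d] = (src < N) ? v[src * HEAD_DIM + d] : 0.f;
            }
            __syncthreads();

            if (i < N) {
                for (int r = 0; r < Bc && j0 + r < N; ++r) {
                    float s = 0.f;                         // s = (q_i . k_j) * scale
                    for (int d = 0; d < HEAD_DIM; ++d) s += qi[d] * Kj[r][d];
                    s *= softmax_scale;

                    // Online softmax: rescale previous partial sums to the new max.
                    float new_max = fmaxf(row_max, s);
                    float corr = expf(row_max - new_max);
                    float p = expf(s - new_max);
                    row_sum = row_sum * corr + p;
                    for (int d = 0; d < HEAD_DIM; ++d)
                        oi[d] = oi[d] * corr + p * Vj[r][d];
                    row_max = new_max;
                }
            }
            __syncthreads();
        }
        if (i < N)
            for (int d = 0; d < HEAD_DIM; ++d) o[i * HEAD_DIM + d] = oi[d] / row_sum;
    }
}
```

A launch along the lines of `flash_attn_fwd<<<batch * n_heads, Br>>>(dQ, dK, dV, dO, seq_len, 1.0f / sqrtf((float)HEAD_DIM));` would run one block per (batch, head); the repositories listed below add backward passes, half precision, tensor-core MMAs, and far more careful tiling on top of this basic structure.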
Alternatives and similar repositories for flash-attention-minimal
Users interested in flash-attention-minimal are comparing it to the libraries listed below.
- Flash attention tutorial written in Python, Triton, CUDA, and CUTLASS ☆494 · Jan 20, 2026 · Updated 2 months ago
- Implements Flash Attention using CuTe. ☆103 · Dec 17, 2024 · Updated last year
- FlashInfer: Kernel Library for LLM Serving ☆5,194 · Mar 21, 2026 · Updated last week
- A minimal cache manager for PagedAttention, on top of llama3. ☆139 · Aug 26, 2024 · Updated last year
- Tile primitives for speedy kernels ☆3,244 · Mar 17, 2026 · Updated last week
- Fastest kernels written from scratch ☆561 · Sep 18, 2025 · Updated 6 months ago
- FP8 flash attention implemented on the Ada architecture using the cutlass repository ☆82 · Aug 12, 2024 · Updated last year
- ☆261 · Jul 11, 2024 · Updated last year
- A series of GPU optimization topics introducing in detail how to optimize CUDA kernels. I will introduce several… ☆1,259 · Jul 29, 2023 · Updated 2 years ago
- How to optimize various algorithms in CUDA. ☆2,887 · Updated this week
- 📚LeetCUDA: Modern CUDA Learn Notes with PyTorch for Beginners🐑, 200+ CUDA Kernels, Tensor Cores, HGEMM, FA-2 MMA.🎉 ☆10,022 · Updated this week
- Examples of CUDA implementations using CUTLASS CuTe ☆271 · Jul 1, 2025 · Updated 8 months ago
- Material for gpu-mode lectures ☆5,865 · Feb 1, 2026 · Updated last month
- Step-by-step optimization of CUDA SGEMM ☆448 · Mar 30, 2022 · Updated 3 years ago
- Fast CUDA matrix multiplication from scratch ☆1,110 · Sep 2, 2025 · Updated 6 months ago
- Flash Attention in 300-500 lines of CUDA/C++ ☆36 · Aug 22, 2025 · Updated 7 months ago
- An easy-to-understand TensorOp matmul tutorial ☆422 · Mar 5, 2026 · Updated 3 weeks ago
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel … ☆193 · Jan 28, 2025 · Updated last year
- A simplified, educational flash-attention implementation using cutlass ☆59 · Aug 12, 2024 · Updated last year
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios. ☆44 · Feb 27, 2025 · Updated last year
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference. ☆46 · Jun 11, 2025 · Updated 9 months ago
- Several optimization methods for half-precision general matrix multiplication (HGEMM) using tensor cores with the WMMA API and MMA PTX instruct… ☆533 · Sep 8, 2024 · Updated last year
- ☆119 · May 16, 2025 · Updated 10 months ago
- CUDA Templates and Python DSLs for High-Performance Linear Algebra ☆9,484 · Mar 18, 2026 · Updated last week
- Flash Attention in raw CUDA C beating PyTorch ☆38 · May 14, 2024 · Updated last year
- Fast and memory-efficient exact attention ☆22,938 · Updated this week
- A throughput-oriented high-performance serving framework for LLMs ☆950 · Oct 29, 2025 · Updated 5 months ago
- Ring attention implementation with flash attention ☆998 · Sep 10, 2025 · Updated 6 months ago
- 📚A curated list of Awesome LLM/VLM Inference Papers with Codes: Flash-Attention, Paged-Attention, WINT8/4, Parallelism, etc.🎉 ☆5,082 · Updated this week
- Standalone Flash Attention v2 kernel without libtorch dependency ☆113 · Sep 10, 2024 · Updated last year
- A simple high-performance CUDA GEMM implementation. ☆428 · Jan 4, 2024 · Updated 2 years ago
- Domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels ☆5,432 · Updated this week
- 🚀 Efficient implementations of state-of-the-art linear attention models ☆4,692 · Updated this week
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. ☆1,045 · Sep 4, 2024 · Updated last year
- Benchmark tests supporting the TiledCUDA library. ☆18 · Nov 19, 2024 · Updated last year
- Optimizing SGEMM kernel functions on NVIDIA GPUs to close-to-cuBLAS performance. ☆409 · Jan 2, 2025 · Updated last year
- Efficient Triton Kernels for LLM Training ☆6,242 · Updated this week
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆598 · Aug 12, 2025 · Updated 7 months ago
- Quantized Attention on GPU ☆44 · Nov 22, 2024 · Updated last year