Implement Flash Attention using CuTe.
☆103 · Dec 17, 2024 · Updated last year
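For orientation, the technique this repo (and most of the alternatives below) implements is the tiled, online-softmax forward pass from Flash Attention. Here is a minimal NumPy sketch of that algorithm; it is an illustration only, not this repo's CuTe code, and the function name `flash_attention_fwd` and tile size `block_k` are assumptions chosen for the example.

```python
# Minimal sketch of the Flash Attention forward pass: one streaming pass over
# K/V tiles with an online softmax, so the full n x n score matrix is never
# materialized. Illustrative only -- names and tile size are assumptions.
import numpy as np

def flash_attention_fwd(Q, K, V, block_k=64):
    """O = softmax(Q K^T / sqrt(d)) V, computed one K/V tile at a time."""
    n, d = Q.shape
    scale = 1.0 / np.sqrt(d)
    O = np.zeros_like(Q)            # running (unnormalized) output
    m = np.full(n, -np.inf)         # running row maxima of the scores
    l = np.zeros(n)                 # running softmax denominators
    for start in range(0, K.shape[0], block_k):
        Kt = K[start:start + block_k]
        Vt = V[start:start + block_k]
        S = (Q @ Kt.T) * scale                  # scores for this tile
        m_new = np.maximum(m, S.max(axis=1))    # updated row maxima
        P = np.exp(S - m_new[:, None])          # shifted tile probabilities
        alpha = np.exp(m - m_new)               # rescale factor for old state
        l = alpha * l + P.sum(axis=1)
        O = alpha[:, None] * O + P @ Vt
        m = m_new
    return O / l[:, None]

# Quick sanity check against a naive reference implementation:
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((128, 32)) for _ in range(3))
S = Q @ K.T / np.sqrt(32)
P = np.exp(S - S.max(axis=1, keepdims=True))
ref = (P / P.sum(axis=1, keepdims=True)) @ V
assert np.allclose(flash_attention_fwd(Q, K, V), ref, atol=1e-6)
```

The CUDA/CuTe implementations listed below apply the same rescale-and-accumulate update, but keep the Q tile and the running statistics in registers and shared memory while streaming K/V tiles through tensor-core MMAs.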
Alternatives and similar repositories for cute-flash-attention
Users interested in cute-flash-attention are comparing it to the repositories listed below.
- Benchmark tests supporting the TiledCUDA library. ☆18 · Nov 19, 2024 · Updated last year
- ☆119 · May 16, 2025 · Updated 10 months ago
- Examples of CUDA implementations using CUTLASS CuTe ☆271 · Jul 1, 2025 · Updated 8 months ago
- ☆169 · Feb 5, 2026 · Updated last month
- Flash Attention in ~100 lines of CUDA (forward pass only) ☆11 · Jun 10, 2024 · Updated last year
- Flash attention tutorial written in Python, Triton, CUDA, and CUTLASS ☆494 · Jan 20, 2026 · Updated 2 months ago
- FP8 flash attention implemented on the Ada architecture using the cutlass repository ☆82 · Aug 12, 2024 · Updated last year
- ⚡️Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs to achieve peak performance⚡️ ☆150 · May 10, 2025 · Updated 10 months ago
- ☆261 · Jul 11, 2024 · Updated last year
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel … ☆193 · Jan 28, 2025 · Updated last year
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios ☆44 · Feb 27, 2025 · Updated last year
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference ☆46 · Jun 11, 2025 · Updated 9 months ago
- ☆63 · Feb 15, 2026 · Updated last month
- Flash Attention in ~100 lines of CUDA (forward pass only) ☆1,098 · Dec 30, 2024 · Updated last year
- Study of cutlass ☆22 · Nov 10, 2024 · Updated last year
- ☆20 · Sep 28, 2024 · Updated last year
- Standalone Flash Attention v2 kernel without libtorch dependency ☆113 · Sep 10, 2024 · Updated last year
- Persistent dense GEMM for Hopper in `CuTeDSL` ☆15 · Aug 9, 2025 · Updated 7 months ago
- ☆184 · May 7, 2025 · Updated 10 months ago
- ☆44 · Nov 1, 2025 · Updated 4 months ago
- ☆38 · Aug 7, 2025 · Updated 7 months ago
- DeeperGEMM: crazy optimized version ☆75 · May 5, 2025 · Updated 10 months ago
- ☆43 · Oct 15, 2025 · Updated 5 months ago
- ☆52 · May 19, 2025 · Updated 10 months ago
- CUTLASS and CuTe Examples ☆134 · Nov 30, 2025 · Updated 3 months ago
- ☆65 · Apr 26, 2025 · Updated 11 months ago
- ☆119 · May 19, 2025 · Updated 10 months ago
- Several optimization methods for half-precision general matrix multiplication (HGEMM) using tensor cores with the WMMA API and MMA PTX instructions ☆533 · Sep 8, 2024 · Updated last year
- Code for the paper: https://arxiv.org/pdf/2309.06979.pdf ☆21 · Jul 29, 2024 · Updated last year
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆51 · Jul 4, 2025 · Updated 8 months ago
- Framework to reduce autotuning overhead to zero for well-known deployments ☆98 · Sep 19, 2025 · Updated 6 months ago
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios ☆16 · Aug 31, 2023 · Updated 2 years ago
- PointPillars TensorRT version pretrained on MMDetection3d with WaymoOpenDataset ☆22 · Aug 11, 2022 · Updated 3 years ago
- An easy-to-understand TensorOp Matmul tutorial ☆422 · Mar 5, 2026 · Updated 3 weeks ago
- Fastest kernels written from scratch ☆561 · Sep 18, 2025 · Updated 6 months ago
- ☆109 · Mar 12, 2026 · Updated 2 weeks ago
- Effective transpose on Hopper GPU ☆28 · Sep 6, 2025 · Updated 6 months ago
- Quantized Attention on GPU ☆44 · Nov 22, 2024 · Updated last year
- ☆155 · Mar 4, 2025 · Updated last year