Collection of kernels written in Triton language
☆181 · Updated Jan 27, 2026
Alternatives and similar repositories for Awesome-Triton-Kernels
Users interested in Awesome-Triton-Kernels are comparing it to the libraries listed below.
- Cataloging released Triton kernels. (☆296, updated Sep 9, 2025)
- Fast low-bit matmul kernels in Triton. (☆436, updated Feb 1, 2026)
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. (☆595, updated Aug 12, 2025)
- ☆105, updated Nov 7, 2024
- A collection of memory-efficient attention operators implemented in the Triton language. (☆288, updated Jun 5, 2024)
- PyTorch bindings for CUTLASS grouped GEMM. (☆144, updated May 29, 2025)
- Shared Middle-Layer for Triton Compilation. (☆331, updated Dec 5, 2025)
- ☆301, updated this week
- Awesome Triton Resources. (☆39, updated Apr 27, 2025)
- Framework to reduce autotune overhead to zero for well-known deployments. (☆97, updated Sep 19, 2025)
- A Triton-only attention backend for vLLM. (☆24, updated Feb 11, 2026)
- Ring-attention experiments. (☆166, updated Oct 17, 2024)
- FlagGems is an operator library for large language models implemented in the Triton language. (☆909, updated this week)
- Puzzles for learning Triton. (☆2,324, updated Nov 18, 2024)
- ☆44, updated Nov 1, 2025
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. (☆327, updated Mar 1, 2026)
- ☆52, updated May 19, 2025
- Extensible collectives library in Triton. (☆96, updated Mar 31, 2025)
- ☆32, updated Jul 2, 2025
- Tile primitives for speedy kernels. (☆3,202, updated Feb 24, 2026)
- ☆262, updated Jul 11, 2024
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA, using CUDA cores for the decoding stage of LLM inference. (☆46, updated Jun 11, 2025)
- An auxiliary project analyzing the characteristics of KV in DiT attention. (☆33, updated Nov 29, 2024)
- ☆65, updated Apr 26, 2025
- KernelBench: Can LLMs write GPU kernels? A benchmark and toolkit with Torch -> CUDA (and more DSLs). (☆836, updated this week)
- Triton-based implementation of Sparse Mixture of Experts. (☆268, updated Oct 3, 2025)
- ☆14, updated Mar 8, 2025
- Quantized attention on GPU. (☆44, updated Nov 22, 2024)
- Automated bottleneck detection and solution orchestration. (☆19, updated Feb 24, 2026)
- ☆20, updated Sep 28, 2024
- [ICLR 2025] Breaking the Throughput-Latency Trade-off for Long Sequences with Speculative Decoding. (☆143, updated Dec 4, 2024)
- Automatic differentiation for Triton kernels. (☆29, updated Aug 12, 2025)
- DeeperGEMM: a crazily optimized version. (☆74, updated May 5, 2025)
- ☆53, updated Feb 24, 2026
- Ship correct and fast LLM kernels to PyTorch. (☆144, updated Jan 14, 2026)
- Tilus is a tile-level kernel programming language with explicit control over shared memory and registers. (☆446, updated this week)
- TileFusion is an experimental C++ macro-kernel template library that elevates the abstraction level in CUDA C for tile processing. (☆107, updated Jun 28, 2025)
- ☆115, updated Aug 26, 2024
- Distributed compiler based on Triton for parallel systems. (☆1,371, updated Feb 13, 2026)