Cataloging released Triton kernels.
☆302 · Sep 9, 2025 · Updated 7 months ago
Alternatives and similar repositories for triton-index
Users that are interested in triton-index are comparing it to the libraries listed below.
- Fast low-bit matmul kernels in Triton · ☆446 · Apr 27, 2026 · Updated last week
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. · ☆600 · Aug 12, 2025 · Updated 8 months ago
- Applied AI experiments and examples for PyTorch · ☆320 · Aug 22, 2025 · Updated 8 months ago
- ☆14 · Mar 8, 2025 · Updated last year
- Collection of kernels written in the Triton language · ☆191 · Jan 27, 2026 · Updated 3 months ago
- Transformers components, but in Triton · ☆34 · May 9, 2025 · Updated 11 months ago
- Puzzles for learning Triton · ☆2,421 · Apr 1, 2026 · Updated last month
- ☆325 · Updated this week
- ☆111 · Mar 12, 2026 · Updated last month
- GPU programming related news and material links · ☆2,120 · Mar 8, 2026 · Updated last month
- Framework to reduce autotune overhead to zero for well-known deployments. · ☆99 · Sep 19, 2025 · Updated 7 months ago
- An ML Systems Onboarding list · ☆1,057 · Feb 19, 2026 · Updated 2 months ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. · ☆351 · Updated this week
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. · ☆108 · Jun 28, 2025 · Updated 10 months ago
- FlagGems is an operator library for large language models implemented in the Triton language. · ☆981 · Updated this week
- Tile primitives for speedy kernels · ☆3,336 · Apr 29, 2026 · Updated last week
- Material for gpu-mode lectures · ☆6,040 · Apr 22, 2026 · Updated last week
- A collection of memory-efficient attention operators implemented in the Triton language. · ☆290 · Jun 5, 2024 · Updated last year
- Distributed Compiler based on Triton for Parallel Systems · ☆1,420 · Apr 22, 2026 · Updated 2 weeks ago
- Flash attention tutorial written in Python, Triton, CUDA, and CUTLASS · ☆508 · Jan 20, 2026 · Updated 3 months ago
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel … · ☆193 · Jan 28, 2025 · Updated last year
- Shared Middle-Layer for Triton Compilation · ☆331 · Dec 5, 2025 · Updated 5 months ago
- ☆265 · Jul 11, 2024 · Updated last year
- A curated list of resources for learning and exploring Triton, OpenAI's programming language for writing efficient GPU code. · ☆481 · Mar 10, 2025 · Updated last year
- EquiTriton is a project that seeks to implement high-performance kernels for commonly used building blocks in equivariant neural networks… · ☆70 · Apr 27, 2026 · Updated last week
- FlexAttention w/ FlashAttention3 support · ☆27 · Oct 5, 2024 · Updated last year
- A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate. · ☆855 · Updated this week
- Experiment of using Tangent to autodiff Triton · ☆82 · Jan 22, 2024 · Updated 2 years ago
- Make Triton easier · ☆50 · Jun 12, 2024 · Updated last year
- PyTorch native quantization and sparsity for training and inference · ☆2,807 · Updated this week
- Automatic differentiation for Triton kernels · ☆29 · Aug 12, 2025 · Updated 8 months ago
- An extensible collectives library in Triton · ☆98 · Mar 31, 2025 · Updated last year
- PyTorch bindings for CUTLASS grouped GEMM. · ☆151 · May 29, 2025 · Updated 11 months ago
- Awesome Triton Resources · ☆40 · Apr 27, 2025 · Updated last year
- GPTQ inference Triton kernel · ☆321 · May 18, 2023 · Updated 2 years ago
- A lightweight design for computation-communication overlap. · ☆229 · Jan 20, 2026 · Updated 3 months ago
- FlashInfer: Kernel Library for LLM Serving · ☆5,544 · Updated this week
- Mirage Persistent Kernel: Compiling LLMs into a MegaKernel · ☆2,234 · Updated this week
- A simple high-performance CUDA GEMM implementation. · ☆434 · Jan 4, 2024 · Updated 2 years ago