VikParuchuri / triton_tutorial
Tutorials for Triton, a language for writing GPU kernels
☆24 · Updated last year
Alternatives and similar repositories for triton_tutorial
Users interested in triton_tutorial are comparing it to the libraries listed below.
- ML/DL Math and Method notes · ☆61 · Updated last year
- Experiment of using Tangent to autodiff Triton · ☆79 · Updated last year
- Write a fast kernel and run it on Discord. See how you compare against the best! · ☆46 · Updated this week
- Collection of autoregressive model implementations · ☆85 · Updated 2 months ago
- Implementations of attention with the softpick function, naive and FlashAttention-2 · ☆79 · Updated last month
- Learn CUDA with PyTorch · ☆27 · Updated this week
- Flash-Muon: An Efficient Implementation of Muon Optimizer · ☆133 · Updated last week
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand · ☆185 · Updated 3 weeks ago
- ☆78 · Updated 11 months ago
- Work in progress. · ☆69 · Updated 3 weeks ago
- PTX-Tutorial Written Purely By AIs (Deep Research by OpenAI and Claude 3.7) · ☆66 · Updated 3 months ago
- ViT inference in Triton because, why not? · ☆29 · Updated last year
- Tiled Flash Linear Attention library for fast and efficient mLSTM kernels · ☆57 · Updated this week
- Cataloging released Triton kernels · ☆238 · Updated 5 months ago
- Explorations into the recently proposed Taylor Series Linear Attention · ☆99 · Updated 10 months ago
- Normalized Transformer (nGPT) · ☆184 · Updated 7 months ago
- Collection of kernels written in the Triton language · ☆132 · Updated 2 months ago
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference · ☆77 · Updated 2 weeks ago
- Cold Compress is a hackable, lightweight, and open-source toolkit for creating and benchmarking cache compression methods built on top of… · ☆137 · Updated 10 months ago
- Custom kernels in the Triton language for accelerating LLMs · ☆22 · Updated last year
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS · ☆189 · Updated last month
- A bunch of kernels that might make stuff slower 😉 · ☆51 · Updated this week
- FlashRNN - Fast RNN Kernels with I/O Awareness · ☆91 · Updated 2 weeks ago
- Code for studying the super weight in LLMs · ☆107 · Updated 6 months ago
- Research implementation of Native Sparse Attention (2502.11089) · ☆54 · Updated 4 months ago
- ☆98 · Updated 5 months ago
- ☆39 · Updated last month
- ☆159 · Updated last year
- Cray-LM unified training and inference stack · ☆22 · Updated 4 months ago
- Mixed-precision training from scratch with Tensors and CUDA · ☆24 · Updated last year