lessw2020 / triton_kernels_for_fun_and_profit
Custom kernels in Triton language for accelerating LLMs
☆17 · Updated 9 months ago
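For readers new to Triton, the sketch below shows the general shape of such a kernel. It is the canonical vector-add pattern from the Triton tutorials, not code taken from this repository: each program instance loads one block of elements, operates on it, and stores the result.

```python
# A minimal Triton kernel: element-wise vector addition.
# Requires a CUDA GPU plus the `triton` and `torch` packages.
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one contiguous block of elements.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard against out-of-bounds reads/writes
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n_elements = out.numel()
    # Launch a 1D grid with one program per BLOCK_SIZE-element chunk.
    grid = lambda meta: (triton.cdiv(n_elements, meta["BLOCK_SIZE"]),)
    add_kernel[grid](x, y, out, n_elements, BLOCK_SIZE=1024)
    return out

# Usage: x = torch.rand(4096, device="cuda"); y = torch.rand(4096, device="cuda")
#        assert torch.allclose(add(x, y), x + y)
```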
Alternatives and similar repositories for triton_kernels_for_fun_and_profit:
Users interested in triton_kernels_for_fun_and_profit are comparing it to the libraries listed below.
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆17 · Updated this week
- Cataloging released Triton kernels. ☆157 · Updated 3 weeks ago
- ☆171 · Updated last week
- Extensible collectives library in Triton. ☆77 · Updated 4 months ago
- KernelBench: Can LLMs Write GPU Kernels? A benchmark of Torch -> CUDA problems. ☆99 · Updated last week
- Learn CUDA with PyTorch. ☆16 · Updated this week
- ☆21 · Updated 3 months ago
- Small-scale distributed training of sequential deep learning models, built on NumPy and MPI. ☆116 · Updated last year
- Ring-attention experiments. ☆119 · Updated 3 months ago
- ML/DL math and method notes. ☆58 · Updated last year
- SGEMM that beats cuBLAS. ☆68 · Updated last week
- Collection of kernels written in the Triton language. ☆91 · Updated 3 months ago
- Fast low-bit matmul kernels in Triton. ☆199 · Updated last week
- Applied AI experiments and examples for PyTorch. ☆216 · Updated last week
- A place to store reusable transformer components of my own creation or found on the interwebs. ☆44 · Updated this week
- ☆52 · Updated 9 months ago
- ☆140 · Updated 11 months ago
- An experiment in using Tangent to autodiff Triton. ☆74 · Updated last year
- Solve puzzles. Learn CUDA. ☆61 · Updated last year
- ☆17 · Updated last year
- ☆75 · Updated 6 months ago
- Make Triton easier. ☆44 · Updated 7 months ago
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8. ☆43 · Updated 6 months ago
- Google TPU optimizations for transformers models. ☆90 · Updated last week
- Experimental PyTorch-native float8 training UX. ☆219 · Updated 5 months ago
- NanoGPT (124M) quality in 2.67B tokens. ☆27 · Updated this week
- ☆85 · Updated 11 months ago
- LLM training in simple, raw C/CUDA. ☆91 · Updated 8 months ago
- ☆41 · Updated last year
- CUDA and Triton implementations of Flash Attention with SoftmaxN. ☆67 · Updated 8 months ago