lessw2020 / triton_kernels_for_fun_and_profit
Custom kernels in Triton language for accelerating LLMs
☆25 · Updated last year
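For context, the custom kernels this repository collects are Python functions decorated with `@triton.jit` that operate on blocks of tensor elements. The sketch below is a minimal, hypothetical vector-add kernel using the standard Triton API (`triton.jit`, `tl.load`, `tl.store`); it only illustrates what a Triton kernel looks like and is not code taken from this repository.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one BLOCK_SIZE-wide slice of the tensors.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard the final, possibly partial block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Launch a 1D grid with one program per BLOCK_SIZE chunk.
    out = torch.empty_like(x)
    n = x.numel()
    grid = (triton.cdiv(n, 1024),)
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out
```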
Alternatives and similar repositories for triton_kernels_for_fun_and_profit
Users interested in triton_kernels_for_fun_and_profit are comparing it to the libraries listed below.
- Small scale distributed training of sequential deep learning models, built on NumPy and MPI. ☆136 · Updated last year
- Cataloging released Triton kernels. ☆252 · Updated 7 months ago
- ☆232 · Updated last week
- Learn CUDA with PyTorch ☆67 · Updated this week
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆51 · Updated this week
- Fast low-bit matmul kernels in Triton ☆353 · Updated last week
- Ring-attention experiments ☆149 · Updated 10 months ago
- Applied AI experiments and examples for PyTorch ☆292 · Updated last week
- Collection of kernels written in the Triton language ☆152 · Updated 4 months ago
- This repository contains the experimental PyTorch native float8 training UX ☆224 · Updated last year
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆191 · Updated 2 months ago
- ☆163 · Updated last year
- A minimal cache manager for PagedAttention, on top of llama3. ☆118 · Updated last year
- Extensible collectives library in Triton ☆88 · Updated 5 months ago
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS ☆214 · Updated 3 months ago
- High-performance SGEMM on CUDA devices ☆97 · Updated 7 months ago
- ☆49 · Updated 7 months ago
- ☆192 · Updated 7 months ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆207 · Updated last week
- An implementation of the transformer architecture as an Nvidia CUDA kernel ☆189 · Updated last year
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆208 · Updated last week
- A bunch of kernels that might make stuff slower 😉 ☆58 · Updated last week
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆571 · Updated 2 weeks ago
- ☆214 · Updated 6 months ago
- PTX tutorial written purely by AIs (OpenAI Deep Research and Claude 3.7) ☆66 · Updated 5 months ago
- Learning about CUDA by writing PTX code. ☆135 · Updated last year
- LLM training in simple, raw C/CUDA ☆104 · Updated last year
- Official problem sets / reference kernels for the GPU MODE leaderboard! ☆74 · Updated this week
- Making the official Triton tutorials actually comprehensible ☆53 · Updated this week
- Experiment of using Tangent to autodiff Triton ☆80 · Updated last year