gpu-mode / reference-kernels
Reference Kernels for the Leaderboard
☆23, updated 3 weeks ago
Alternatives and similar repositories for reference-kernels:
Users interested in reference-kernels are also comparing it with the libraries listed below.
- LLM training in simple, raw C/CUDA (☆92, updated 10 months ago)
- High-Performance SGEMM on CUDA devices (☆87, updated 2 months ago)
- Write a fast kernel and run it on Discord. See how you compare against the best! (☆35, updated this week)
- Extensible collectives library in Triton (☆84, updated 6 months ago)
- ☆21, updated 3 weeks ago
- Small-scale distributed training of sequential deep learning models, built on NumPy and MPI (☆127, updated last year)
- ☆28, updated 2 months ago
- Explore training for quantized models (☆17, updated 2 months ago)
- Fast low-bit matmul kernels in Triton (☆272, updated this week)
- ☆192, updated this week
- Make Triton easier (☆47, updated 9 months ago)
- PTX tutorial written purely by AIs (OpenAI Deep Research and Claude 3.7) (☆60, updated this week)
- Experiment of using Tangent to autodiff Triton (☆78, updated last year)
- Learning about CUDA by writing PTX code (☆125, updated last year)
- Experimental GPU language with meta-programming (☆22, updated 6 months ago)
- Personal solutions to the Triton Puzzles (☆18, updated 8 months ago)
- Learn CUDA with PyTorch (☆19, updated 2 months ago)
- Collection of kernels written in the Triton language (☆114, updated last month)
- FlexAttention with FlashAttention3 support (☆26, updated 5 months ago)
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8 (☆45, updated 8 months ago)
- Fastest kernels written from scratch (☆202, updated 3 weeks ago)
- Cataloging released Triton kernels (☆212, updated 2 months ago)
- ☆73, updated 4 months ago
- Accelerated First-Order Parallel Associative Scan (☆177, updated 7 months ago)
- An implementation of the transformer architecture as an Nvidia CUDA kernel (☆174, updated last year)
- A bunch of kernels that might make stuff slower 😉 (☆29, updated this week)
- A user-friendly toolchain that enables seamless execution of ONNX models using JAX as the backend (☆109, updated last month)
- ☆27, updated 2 months ago
- Research implementation of Native Sparse Attention (2502.11089) (☆53, updated last month)
- An experimental CPU backend for Triton (https://github.com/openai/triton) (☆40, updated last week)