gpu-mode / reference-kernels
Official Problem Sets / Reference Kernels for the GPU MODE Leaderboard!
☆177 · Updated last week
Alternatives and similar repositories for reference-kernels
Users interested in reference-kernels are comparing it to the libraries listed below.
- ☆268 · Updated last week
- ☆127 · Updated 2 months ago
- Helpful kernel tutorials and examples for tile-based GPU programming ☆501 · Updated this week
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS ☆244 · Updated 7 months ago
- Cataloging released Triton kernels. ☆280 · Updated 3 months ago
- Fastest kernels written from scratch ☆501 · Updated 3 months ago
- Fast low-bit matmul kernels in Triton ☆413 · Updated last week
- Learn CUDA with PyTorch ☆138 · Updated last week
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆306 · Updated this week
- kernels, of the mega variety ☆634 · Updated 3 months ago
- Efficient implementation of DeepSeek Ops (Blockwise FP8 GEMM, MoE, and MLA) for AMD Instinct MI300X ☆73 · Updated last month
- A Quirky Assortment of CuTe Kernels ☆724 · Updated last week
- AMD RAD's multi-GPU Triton-based framework for seamless multi-GPU programming ☆143 · Updated this week
- Tilus is a tile-level kernel programming language with explicit control over shared memory and registers. ☆433 · Updated 2 weeks ago
- A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate. ☆697 · Updated this week
- Collection of kernels written in Triton language ☆173 · Updated 8 months ago
- High-Performance SGEMM on CUDA devices (see the tiled SGEMM sketch after this list) ☆114 · Updated 11 months ago
- Small scale distributed training of sequential deep learning models, built on Numpy and MPI. ☆153 · Updated 2 years ago
- Step by step implementation of a fast softmax kernel in CUDA (see the softmax sketch after this list) ☆58 · Updated 11 months ago
- ☆82 · Updated 3 weeks ago
- TritonParse: A Compiler Tracer, Visualizer, and Reproducer for Triton Kernels ☆179 · Updated this week
- Evaluating Large Language Models for CUDA Code Generation: ComputeEval is a framework designed to generate and evaluate CUDA code from Lar… ☆87 · Updated last month
- Applied AI experiments and examples for PyTorch ☆311 · Updated 4 months ago
- An experimental CPU backend for Triton ☆167 · Updated last month
- NVIDIA NVSHMEM is a parallel programming interface for NVIDIA GPUs based on OpenSHMEM. NVSHMEM can significantly reduce multi-process com… ☆425 · Updated last week
- Quantized LLM training in pure CUDA/C++. ☆224 · Updated last week
- Ship correct and fast LLM kernels to PyTorch ☆127 · Updated last week
- ring-attention experiments ☆160 · Updated last year
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆64 · Updated last week
- extensible collectives library in triton ☆91 · Updated 8 months ago
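
For orientation, the sketch below shows the classic shared-memory tiled SGEMM that the SGEMM-focused repositories above start from. It is a hedged illustration, not code from any listed project: it assumes row-major float32 matrices whose dimensions are exact multiples of the tile size, and it omits the vectorized loads, double buffering, register tiling, and tensor-core paths those projects add.

```cuda
// Minimal shared-memory tiled SGEMM sketch: C = A * B, row-major.
// Assumes M, N, K are multiples of TILE so bounds checks can be omitted.
#define TILE 32

__global__ void sgemm_tiled(const float* A, const float* B, float* C,
                            int M, int N, int K) {
    __shared__ float As[TILE][TILE];
    __shared__ float Bs[TILE][TILE];

    int row = blockIdx.y * TILE + threadIdx.y;  // C row owned by this thread
    int col = blockIdx.x * TILE + threadIdx.x;  // C column owned by this thread
    float acc = 0.0f;

    for (int k0 = 0; k0 < K; k0 += TILE) {
        // Stage one TILE x TILE block of A and B into shared memory.
        As[threadIdx.y][threadIdx.x] = A[row * K + k0 + threadIdx.x];
        Bs[threadIdx.y][threadIdx.x] = B[(k0 + threadIdx.y) * N + col];
        __syncthreads();

        // Accumulate the partial dot product contributed by this tile.
        for (int k = 0; k < TILE; ++k)
            acc += As[threadIdx.y][k] * Bs[k][threadIdx.x];
        __syncthreads();
    }
    C[row * N + col] = acc;
}
// Launch sketch: dim3 block(TILE, TILE); dim3 grid(N / TILE, M / TILE);
// sgemm_tiled<<<grid, block>>>(dA, dB, dC, M, N, K);
```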
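
The step-by-step softmax repository above walks through the standard numerically stable pattern: subtract the row maximum, exponentiate, and divide by the row sum. The sketch below shows that pattern with one 32-thread warp per row and warp-shuffle reductions; it is an assumption-laden illustration (float32 inputs, one block per row), not code from that repository.

```cuda
// Minimal numerically stable row-wise softmax sketch: one warp per row.
// Launch with <<<rows, 32>>>; rows of arbitrary length are handled by the
// strided loops below.
#include <math.h>

__global__ void softmax_rows(const float* __restrict__ in,
                             float* __restrict__ out,
                             int cols) {
    int row  = blockIdx.x;   // one 32-thread block (one warp) per row
    int lane = threadIdx.x;
    const float* x = in  + (size_t)row * cols;
    float*       y = out + (size_t)row * cols;

    // 1) Row maximum, for numerical stability.
    float m = -INFINITY;
    for (int c = lane; c < cols; c += 32) m = fmaxf(m, x[c]);
    for (int off = 16; off > 0; off >>= 1)
        m = fmaxf(m, __shfl_xor_sync(0xffffffffu, m, off));

    // 2) Sum of shifted exponentials.
    float s = 0.0f;
    for (int c = lane; c < cols; c += 32) s += expf(x[c] - m);
    for (int off = 16; off > 0; off >>= 1)
        s += __shfl_xor_sync(0xffffffffu, s, off);

    // 3) Normalize.
    for (int c = lane; c < cols; c += 32) y[c] = expf(x[c] - m) / s;
}
```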