SzymonOzog / FastSoftmax
Step-by-step implementation of a fast softmax kernel in CUDA
☆52 · Updated 9 months ago
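For context on what the repository builds up step by step, below is a minimal sketch of the numerically stable row-wise softmax pattern such tutorials typically start from: subtract the row max, exponentiate, sum, normalize. This is an illustrative baseline, not code from FastSoftmax; the kernel name `softmax_rows` and the block size are assumptions for the example.

```cuda
// Illustrative baseline, not FastSoftmax's code: one thread block per row,
// shared-memory tree reductions for the row max and the row sum.
#include <cuda_runtime.h>
#include <cstdio>
#include <cmath>
#include <vector>

constexpr int BLOCK = 256;  // assumed power of two for the tree reduction

__global__ void softmax_rows(const float* in, float* out, int cols) {
    const float* row_in  = in  + (size_t)blockIdx.x * cols;
    float*       row_out = out + (size_t)blockIdx.x * cols;
    __shared__ float red[BLOCK];

    // Step 1: row max, so exp(x - max) cannot overflow.
    float m = -INFINITY;
    for (int c = threadIdx.x; c < cols; c += BLOCK)
        m = fmaxf(m, row_in[c]);
    red[threadIdx.x] = m;
    __syncthreads();
    for (int s = BLOCK / 2; s > 0; s >>= 1) {
        if (threadIdx.x < s)
            red[threadIdx.x] = fmaxf(red[threadIdx.x], red[threadIdx.x + s]);
        __syncthreads();
    }
    m = red[0];
    __syncthreads();  // all threads read red[0] before it is overwritten

    // Step 2: row sum of the shifted exponentials.
    float sum = 0.f;
    for (int c = threadIdx.x; c < cols; c += BLOCK)
        sum += expf(row_in[c] - m);
    red[threadIdx.x] = sum;
    __syncthreads();
    for (int s = BLOCK / 2; s > 0; s >>= 1) {
        if (threadIdx.x < s)
            red[threadIdx.x] += red[threadIdx.x + s];
        __syncthreads();
    }
    sum = red[0];

    // Step 3: normalize.
    for (int c = threadIdx.x; c < cols; c += BLOCK)
        row_out[c] = expf(row_in[c] - m) / sum;
}

int main() {
    const int rows = 4, cols = 1024;
    std::vector<float> h((size_t)rows * cols, 1.0f);
    float *d_in, *d_out;
    cudaMalloc(&d_in,  h.size() * sizeof(float));
    cudaMalloc(&d_out, h.size() * sizeof(float));
    cudaMemcpy(d_in, h.data(), h.size() * sizeof(float), cudaMemcpyHostToDevice);
    softmax_rows<<<rows, BLOCK>>>(d_in, d_out, cols);
    cudaMemcpy(h.data(), d_out, h.size() * sizeof(float), cudaMemcpyDeviceToHost);
    printf("out[0] = %f (uniform input, expect %f)\n", h[0], 1.0f / cols);
    cudaFree(d_in); cudaFree(d_out);
    return 0;
}
```

Typical next steps in a tutorial of this kind are warp-shuffle reductions, vectorized loads, and a single-pass (online) max-and-sum; several of the repositories listed below explore the same progression.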
Alternatives and similar repositories for FastSoftmax
Users interested in FastSoftmax are comparing it to the repositories listed below:
- Cataloging released Triton kernels. ☆261 · Updated last month
- Official Problem Sets / Reference Kernels for the GPU MODE Leaderboard! ☆99 · Updated this week
- ☆242 · Updated last week
- Fast low-bit matmul kernels in Triton ☆379 · Updated 2 weeks ago
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning ☆114 · Updated 2 weeks ago
- ☆120 · Updated 7 months ago
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS ☆232 · Updated 5 months ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆258 · Updated this week
- High-Performance SGEMM on CUDA devices ☆107 · Updated 8 months ago
- Applied AI experiments and examples for PyTorch ☆299 · Updated last month
- CUDA Matrix Multiplication Optimization ☆228 · Updated last year
- Collection of kernels written in the Triton language ☆156 · Updated 6 months ago
- Fastest kernels written from scratch ☆374 · Updated 3 weeks ago
- TritonParse: A Compiler Tracer, Visualizer, and Reproducer for Triton Kernels ☆155 · Updated last week
- Extensible collectives library in Triton ☆89 · Updated 6 months ago
- Ring-attention experiments ☆153 · Updated last year
- ☆79 · Updated 3 weeks ago
- Kernels, of the mega variety ☆586 · Updated 2 weeks ago
- High-speed GEMV kernels, with up to 2.7× speedup over the PyTorch baseline ☆116 · Updated last year
- ☆240 · Updated last year
- A Quirky Assortment of CuTe Kernels ☆627 · Updated this week
- Small-scale distributed training of sequential deep learning models, built on NumPy and MPI ☆144 · Updated last year
- ☆92 · Updated 11 months ago
- Learn CUDA with PyTorch ☆87 · Updated 3 weeks ago
- Custom kernels in the Triton language for accelerating LLMs ☆26 · Updated last year
- Learning about CUDA by writing PTX code ☆143 · Updated last year
- A minimal cache manager for PagedAttention, on top of llama3 ☆123 · Updated last year
- A bunch of kernels that might make stuff slower 😉 ☆61 · Updated this week
- AMD RAD's multi-GPU Triton-based framework for seamless multi-GPU programming ☆83 · Updated this week
- Quantized LLM training in pure CUDA/C++ ☆198 · Updated this week