SzymonOzog / FastSoftmax
Step-by-step implementation of a fast softmax kernel in CUDA
☆ 60 · Updated last year
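The repository walks through optimizing a softmax kernel step by step. For context, a minimal, unoptimized CUDA softmax over matrix rows (one thread block per row, shared-memory reductions for the max and the sum) might look like the sketch below. This is an illustrative baseline under assumed names and launch parameters, not the repository's actual code.

```cuda
#include <cuda_runtime.h>
#include <math.h>

// Illustrative baseline: one block per row, shared-memory tree reductions.
// Assumes blockDim.x is a power of two. Not the repository's code.
__global__ void softmax_rows(const float* __restrict__ in,
                             float* __restrict__ out,
                             int cols) {
    extern __shared__ float shm[];
    const float* row_in  = in  + (size_t)blockIdx.x * cols;
    float*       row_out = out + (size_t)blockIdx.x * cols;

    // 1) Row maximum, for the numerically stable exp(x - max).
    float local_max = -INFINITY;
    for (int c = threadIdx.x; c < cols; c += blockDim.x)
        local_max = fmaxf(local_max, row_in[c]);
    shm[threadIdx.x] = local_max;
    __syncthreads();
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (threadIdx.x < s)
            shm[threadIdx.x] = fmaxf(shm[threadIdx.x], shm[threadIdx.x + s]);
        __syncthreads();
    }
    float row_max = shm[0];
    __syncthreads();

    // 2) Sum of exponentials.
    float local_sum = 0.0f;
    for (int c = threadIdx.x; c < cols; c += blockDim.x)
        local_sum += expf(row_in[c] - row_max);
    shm[threadIdx.x] = local_sum;
    __syncthreads();
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (threadIdx.x < s)
            shm[threadIdx.x] += shm[threadIdx.x + s];
        __syncthreads();
    }
    float row_sum = shm[0];

    // 3) Normalize.
    for (int c = threadIdx.x; c < cols; c += blockDim.x)
        row_out[c] = expf(row_in[c] - row_max) / row_sum;
}

// Example launch (hypothetical sizes):
//   softmax_rows<<<rows, 256, 256 * sizeof(float)>>>(d_in, d_out, cols);
```

A fast implementation would go further, e.g. warp-shuffle reductions, vectorized loads, and fusing the three passes, which is the kind of progression a step-by-step softmax tutorial typically covers.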
Alternatives and similar repositories for FastSoftmax
Users interested in FastSoftmax are comparing it to the libraries listed below.
- ☆ 277 · Updated this week
- Official Problem Sets / Reference Kernels for the GPU MODE Leaderboard! ☆ 194 · Updated last week
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS ☆ 250 · Updated 8 months ago
- Fast low-bit matmul kernels in Triton ☆ 424 · Updated this week
- Cataloging released Triton kernels. ☆ 291 · Updated 4 months ago
- CUDA Matrix Multiplication Optimization ☆ 252 · Updated last year
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆ 319 · Updated this week
- Learn CUDA with PyTorch ☆ 185 · Updated this week
- ☆ 128 · Updated 3 months ago
- High-speed GEMV kernels, up to 2.7x speedup over the PyTorch baseline. ☆ 127 · Updated last year
- High-Performance FP32 GEMM on CUDA devices ☆ 117 · Updated last year
- Fastest kernels written from scratch ☆ 528 · Updated 4 months ago
- Small-scale distributed training of sequential deep learning models, built on NumPy and MPI. ☆ 155 · Updated 2 years ago
- kernels, of the mega variety ☆ 657 · Updated 4 months ago
- Applied AI experiments and examples for PyTorch ☆ 314 · Updated 5 months ago
- ☆ 102 · Updated last year
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning ☆ 163 · Updated 2 months ago
- Learning about CUDA by writing PTX code. ☆ 151 · Updated last year
- ☆ 258 · Updated last year
- ☆ 89 · Updated 2 months ago
- Helpful kernel tutorials and examples for tile-based GPU programming ☆ 617 · Updated this week
- Several optimization methods for half-precision general matrix-vector multiplication (HGEMV) using CUDA cores. ☆ 71 · Updated last year
- TritonParse: A Compiler Tracer, Visualizer, and Reproducer for Triton Kernels ☆ 189 · Updated this week
- Collection of kernels written in the Triton language ☆ 175 · Updated 9 months ago
- An implementation of the transformer architecture as an NVIDIA CUDA kernel ☆ 202 · Updated 2 years ago
- ring-attention experiments ☆ 165 · Updated last year
- A bunch of kernels that might make stuff slower 😉 ☆ 75 · Updated last week
- AMD RAD's multi-GPU Triton-based framework for seamless multi-GPU programming ☆ 164 · Updated last week
- coding CUDA every day! ☆ 72 · Updated last month
- ☆ 178 · Updated last year