SzymonOzog / FastSoftmax
☆35 · Updated 5 months ago
Alternatives and similar repositories for FastSoftmax
Users who are interested in FastSoftmax are comparing it to the libraries listed below.
- Reference Kernels for the Leaderboard · ☆55 · Updated this week
- Write a fast kernel and run it on Discord. See how you compare against the best! · ☆44 · Updated this week
- ☆54 · Updated last week
- High-Performance SGEMM on CUDA devices · ☆94 · Updated 4 months ago
- ☆215 · Updated this week
- Small-scale distributed training of sequential deep learning models, built on NumPy and MPI · ☆133 · Updated last year
- Cataloging released Triton kernels · ☆229 · Updated 4 months ago
- Fast low-bit matmul kernels in Triton · ☆311 · Updated this week
- Collection of kernels written in the Triton language · ☆127 · Updated 2 months ago
- Extensible collectives library in Triton · ☆87 · Updated 2 months ago
- A simple but fast implementation of matrix multiplication in CUDA · ☆35 · Updated 10 months ago
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS · ☆183 · Updated last month
- Experiment using Tangent to autodiff Triton · ☆79 · Updated last year
- ☆107 · Updated 2 months ago
- This repository contains the experimental PyTorch native float8 training UX · ☆223 · Updated 10 months ago
- A bunch of kernels that might make stuff slower 😉 · ☆48 · Updated this week
- Fast Hadamard transform in CUDA, with a PyTorch interface · ☆195 · Updated last year
- ☆157 · Updated last year
- Applied AI experiments and examples for PyTorch · ☆274 · Updated last week
- ☆80 · Updated 7 months ago
- PTX-Tutorial Written Purely By AIs (OpenAI's Deep Research and Claude 3.7) · ☆67 · Updated 2 months ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance · ☆127 · Updated this week
- ring-attention experiments · ☆145 · Updated 7 months ago
- LLM training in simple, raw C/CUDA · ☆99 · Updated last year
- Fastest kernels written from scratch · ☆269 · Updated 2 months ago
- CUDA Matrix Multiplication Optimization · ☆189 · Updated 10 months ago
- Learn CUDA with PyTorch · ☆25 · Updated this week
- High-speed GEMV kernels, with up to 2.7x speedup over the PyTorch baseline · ☆109 · Updated 10 months ago
- ☆105 · Updated 9 months ago
- An experimental CPU backend for Triton (https://github.com/openai/triton) · ☆43 · Updated 2 months ago