tgautam03 / tGeMM
General Matrix Multiplication using NVIDIA Tensor Cores
☆27 · Updated 11 months ago
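For context on what tGeMM covers, here is a minimal, illustrative sketch of a single-warp 16x16x16 half-precision GEMM tile using CUDA's WMMA Tensor Core API. This is not the repository's actual code; the kernel name, tile shape, and matrix layouts are assumptions chosen for brevity.

```cuda
// Minimal sketch: one warp computes a 16x16x16 tile C = A * B with Tensor Cores.
// Kernel name, tile size, and layouts are illustrative, not taken from tGeMM.
#include <mma.h>
#include <cuda_fp16.h>
using namespace nvcuda;

__global__ void wmma_tile_16x16x16(const half *A, const half *B, float *C) {
    // Fragments live in registers and are operated on by the whole warp.
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

    wmma::fill_fragment(c_frag, 0.0f);

    // Load the 16x16 input tiles (leading dimension 16) and multiply-accumulate.
    wmma::load_matrix_sync(a_frag, A, 16);
    wmma::load_matrix_sync(b_frag, B, 16);
    wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);

    // Write the accumulated tile back to global memory in row-major order.
    wmma::store_matrix_sync(C, c_frag, 16, wmma::mem_row_major);
}

// Launched with a single warp, e.g. wmma_tile_16x16x16<<<1, 32>>>(dA, dB, dC);
```

A full Tensor Core GEMM tiles the whole problem over thread blocks and warps and stages data through shared memory; the sketch above only shows the core WMMA calls.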
Alternatives and similar repositories for tGeMM
Users interested in tGeMM are comparing it to the libraries listed below.
- Custom PTX Instruction Benchmark ☆137 · Updated 10 months ago
- Attention in SRAM on Tenstorrent Grayskull ☆40 · Updated last year
- High-Performance SGEMM on CUDA devices ☆115 · Updated 11 months ago
- ☆83 · Updated last month
- a mini 2x2 systolic array and PE demo ☆66 · Updated 3 weeks ago
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆66 · Updated 3 weeks ago
- Learning about CUDA by writing PTX code. ☆151 · Updated last year
- Personal solutions to the Triton Puzzles ☆20 · Updated last year
- LLM training in simple, raw C/CUDA ☆109 · Updated last year
- ☆88 · Updated 2 months ago
- Automatic differentiation for Triton Kernels ☆29 · Updated 4 months ago
- Step by step implementation of a fast softmax kernel in CUDA ☆59 · Updated last year
- My submission for the GPUMODE/AMD fp8 mm challenge ☆29 · Updated 7 months ago
- ☆15 · Updated 2 months ago
- Official Problem Sets / Reference Kernels for the GPU MODE Leaderboard! ☆182 · Updated 2 weeks ago
- An experimental CPU backend for Triton (https://github.com/openai/triton) ☆48 · Updated 4 months ago
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆104 · Updated 6 months ago
- TritonParse: A Compiler Tracer, Visualizer, and Reproducer for Triton Kernels ☆182 · Updated this week
- Quantized LLM training in pure CUDA/C++. ☆230 · Updated this week
- PTX-Tutorial Written Purely By AIs (OpenAI's Deep Research and Claude 3.7) ☆66 · Updated 9 months ago
- Super fast FP32 matrix multiplication on RDNA3 ☆82 · Updated 9 months ago
- ☆23 · Updated 6 months ago
- The Riallto Open Source Project from AMD ☆83 · Updated 9 months ago
- Row-wise block scaling for fp8 quantization matrix multiplication. Solution to the GPU MODE AMD challenge. ☆17 · Updated 3 months ago
- Custom kernels in the Triton language for accelerating LLMs ☆27 · Updated last year
- AMD RAD's multi-GPU Triton-based framework for seamless multi-GPU programming ☆143 · Updated last week
- Ship correct and fast LLM kernels to PyTorch ☆130 · Updated this week
- ☆53 · Updated 8 months ago
- Hand-Rolled GPU communications library ☆76 · Updated last month
- NVIDIA tools guide ☆152 · Updated last year