moritztng / grayskull-attention
Attention in SRAM on Tenstorrent Grayskull
☆38 · Updated last year
Alternatives and similar repositories for grayskull-attention
Users interested in grayskull-attention are comparing it to the libraries listed below.
- Tenstorrent MLIR compiler ☆199 · Updated this week
- High-Performance SGEMM on CUDA devices ☆107 · Updated 9 months ago
- Custom PTX Instruction Benchmark ☆131 · Updated 8 months ago
- Tenstorrent's MLIR Based Compiler. We aim to enable developers to run AI on all configurations of Tenstorrent hardware, through an open-s… ☆126 · Updated last week
- General Matrix Multiplication using NVIDIA Tensor Cores ☆22 · Updated 9 months ago
- The TT-Forge FE is a graph compiler designed to optimize and transform computational graphs for deep learning models, enhancing their per… ☆50 · Updated last week
- An experimental CPU backend for Triton (https://github.com/openai/triton) ☆45 · Updated 2 months ago
- Official Problem Sets / Reference Kernels for the GPU MODE Leaderboard! ☆99 · Updated last week
- ☆93 · Updated 11 months ago
- MLIR-based partitioning system ☆139 · Updated this week
- ☆42 · Updated last month
- ☆46 · Updated 5 months ago
- Buda Compiler Backend for Tenstorrent devices ☆30 · Updated 6 months ago
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆100 · Updated 3 months ago
- Repo for AI Compiler team. The intended purpose of this repo is for implementation of a PJRT device. ☆36 · Updated this week
- Super fast FP32 matrix multiplication on RDNA3 ☆76 · Updated 6 months ago
- This repository contains companion software for the Colfax Research paper "Categorical Foundations for CuTe Layouts". ☆69 · Updated last month
- An experimental CPU backend for Triton ☆154 · Updated last week
- TritonParse: A Compiler Tracer, Visualizer, and Reproducer for Triton Kernels ☆164 · Updated this week
- LLM training in simple, raw C/CUDA ☆107 · Updated last year
- Automatic differentiation for Triton Kernels ☆11 · Updated 2 months ago
- A framework that supports executing unmodified CUDA source code on non-NVIDIA devices. ☆136 · Updated 9 months ago
- High-speed GEMV kernels, with up to 2.7x speedup over the PyTorch baseline. ☆117 · Updated last year
- The Riallto Open Source Project from AMD ☆84 · Updated 6 months ago
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning ☆119 · Updated 3 weeks ago
- Test suite for probing the numerical behavior of NVIDIA tensor cores ☆41 · Updated last year
- ☆13 · Updated 3 weeks ago
- How to ensure correctness and ship LLM generated kernels in PyTorch ☆107 · Updated this week
- Unofficial description of the CUDA assembly (SASS) instruction sets. ☆149 · Updated 3 months ago
- Framework to reduce autotune overhead to zero for well known deployments. ☆84 · Updated last month