huggingface / hf-rocm-kernels
☆22 · Updated 3 months ago
Alternatives and similar repositories for hf-rocm-kernels
Users interested in hf-rocm-kernels are comparing it to the libraries listed below.
- Automatic differentiation for Triton Kernels ☆11 · Updated 2 months ago
- ☆13 · Updated 3 weeks ago
- How to ensure correctness and ship LLM-generated kernels in PyTorch ☆107 · Updated this week
- ☆46 · Updated 5 months ago
- High-Performance SGEMM on CUDA devices ☆107 · Updated 9 months ago
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning ☆119 · Updated 3 weeks ago
- ☆42 · Updated last month
- TritonParse: A Compiler Tracer, Visualizer, and Reproducer for Triton Kernels ☆164 · Updated this week
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆100 · Updated 3 months ago
- This repository contains companion software for the Colfax Research paper "Categorical Foundations for CuTe Layouts". ☆69 · Updated last month
- Extensible collectives library in Triton ☆90 · Updated 6 months ago
- Row-wise block scaling for FP8 quantized matrix multiplication; solution to the GPU MODE AMD challenge. ☆15 · Updated last month
- ☆31 · Updated 3 months ago
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆58 · Updated 2 weeks ago
- Official Problem Sets / Reference Kernels for the GPU MODE Leaderboard! ☆99 · Updated last week
- PTX tutorial written purely by AIs (OpenAI's Deep Research and Claude 3.7) ☆66 · Updated 7 months ago
- Framework to reduce autotune overhead to zero for well-known deployments. ☆84 · Updated last month
- Learning about CUDA by writing PTX code. ☆145 · Updated last year
- Evaluating Large Language Models for CUDA Code Generation. ComputeEval is a framework designed to generate and evaluate CUDA code from Lar… ☆69 · Updated 3 weeks ago
- Efficient implementation of DeepSeek Ops (Blockwise FP8 GEMM, MoE, and MLA) for AMD Instinct MI300X ☆71 · Updated 2 months ago
- AMD RAD's multi-GPU Triton-based framework for seamless multi-GPU programming ☆93 · Updated this week
- A bunch of kernels that might make stuff slower 😉 ☆62 · Updated this week
- ☆35 · Updated this week
- ring-attention experiments ☆155 · Updated last year
- ☆93 · Updated 11 months ago
- An LLM-based AI agent that writes correct and efficient GPU kernels automatically. ☆35 · Updated 2 months ago
- ☆65 · Updated 6 months ago
- ☆50 · Updated 5 months ago
- Ahead of Time (AOT) Triton Math Library ☆80 · Updated last week
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆264 · Updated this week