luongthecong123 / fp8-quant-matmul
Row-wise block scaling for FP8-quantized matrix multiplication. Solution to the GPU MODE AMD challenge.
☆15 · Updated last week
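The one-line description above compresses a lot; as a rough illustration, here is a minimal NumPy sketch of row-wise block scaling for FP8 matmul. It assumes FP8 E4M3 (max finite value 448) and a block size of 128 columns; the repository's actual kernel, block size, and scale layout may differ, and the cast to an FP8 storage type is elided here.

```python
import numpy as np

FP8_E4M3_MAX = 448.0  # largest finite value representable in FP8 E4M3

def quantize_rowwise_blocks(x: np.ndarray, block: int = 128):
    """Split each row into `block`-wide chunks and scale each chunk so its
    max magnitude maps to FP8_E4M3_MAX. Returns the scaled values and one
    scale per (row, block). A real kernel would additionally cast the
    scaled values to an FP8 storage type; that cast is elided here."""
    m, k = x.shape
    assert k % block == 0, "K must be a multiple of the block size"
    chunks = x.reshape(m, k // block, block)
    scales = np.abs(chunks).max(axis=-1, keepdims=True) / FP8_E4M3_MAX
    scales = np.where(scales == 0.0, 1.0, scales)  # guard all-zero blocks
    q = np.clip(chunks / scales, -FP8_E4M3_MAX, FP8_E4M3_MAX)
    return q.reshape(m, k), scales.squeeze(-1)

def matmul_dequant(qa, sa, qb, sb, block: int = 128):
    """Reference check: rescale each block back to fp32, then multiply."""
    m, k = qa.shape
    n = qb.shape[0]
    a = (qa.reshape(m, k // block, block) * sa[..., None]).reshape(m, k)
    b = (qb.reshape(n, k // block, block) * sb[..., None]).reshape(n, k)
    return a @ b.T

a = np.random.randn(4, 256).astype(np.float32)
b = np.random.randn(8, 256).astype(np.float32)
qa, sa = quantize_rowwise_blocks(a)
qb, sb = quantize_rowwise_blocks(b)
# Error is tiny here because the FP8 rounding step itself is elided.
print(np.abs(matmul_dequant(qa, sa, qb, sb) - a @ b.T).max())
```

Keeping one scale per row-block (rather than one per tensor) limits the dynamic range each FP8 block must cover, which is why such a GEMM carries a scale tensor alongside each quantized matrix.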
Alternatives and similar repositories for fp8-quant-matmul
Users interested in fp8-quant-matmul are comparing it to the libraries listed below.
- My submission for the GPUMODE/AMD fp8 mm challenge ☆28 · Updated 3 months ago
- ☆92 · Updated 3 weeks ago
- TritonParse: A Compiler Tracer, Visualizer, and mini-Reproducer Generator (WIP) for Triton Kernels ☆150 · Updated this week
- ☆56 · Updated 2 months ago
- Samples of good AI-generated CUDA kernels ☆89 · Updated 3 months ago
- Efficient implementation of DeepSeek Ops (Blockwise FP8 GEMM, MoE, and MLA) for AMD Instinct MI300X ☆69 · Updated last month
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning ☆87 · Updated last week
- coding CUDA every day! ☆61 · Updated 4 months ago
- High-Performance SGEMM on CUDA devices ☆101 · Updated 7 months ago
- ☆64 · Updated 4 months ago
- How to ship your LLM-generated kernels to PyTorch ☆49 · Updated this week
- LLM Inference on consumer devices ☆124 · Updated 6 months ago
- General Matrix Multiplication using NVIDIA Tensor Cores ☆21 · Updated 7 months ago
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆97 · Updated 2 months ago
- Automatic differentiation for Triton Kernels ☆11 · Updated last month
- Custom PTX Instruction Benchmark ☆126 · Updated 6 months ago
- Learning about CUDA by writing PTX code. ☆135 · Updated last year
- CUDA-L1: Improving CUDA Optimization via Contrastive Reinforcement Learning ☆183 · Updated last month
- ☆30 · Updated 2 months ago
- PipeInfer: Accelerating LLM Inference using Asynchronous Pipelined Speculation ☆30 · Updated 10 months ago
- Lightweight Python Wrapper for OpenVINO, enabling LLM inference on NPUs ☆23 · Updated 9 months ago
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆55 · Updated last week
- PTX-Tutorial written purely by AIs (OpenAI's Deep Research and Claude 3.7) ☆66 · Updated 5 months ago
- An LLM-based AI agent that automatically writes correct and efficient GPU kernels ☆28 · Updated last month
- Official Problem Sets / Reference Kernels for the GPU MODE Leaderboard! ☆85 · Updated last week
- [WIP] Better (FP8) attention for Hopper ☆32 · Updated 6 months ago
- ☆29 · Updated 3 months ago
- ☆95 · Updated 3 months ago
- ☆150 · Updated 2 months ago
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference. ☆42 · Updated 3 months ago