luongthecong123 / fp8-quant-matmul
Row-wise block scaling for FP8 quantized matrix multiplication. Solution to the GPU MODE AMD challenge.
☆16 · Updated 2 months ago
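For context on what the repository's one-line description refers to, here is a minimal sketch of FP8 quantization with per-row scales followed by a dequantized matmul reference. This is not the repository's actual GPU kernel: it assumes a recent PyTorch with the `torch.float8_e4m3fn` dtype, uses one scale per row rather than per-row blocks, and the function names are illustrative only.

```python
# Sketch only: per-row-scale FP8 (e4m3) quantization and a dequantized
# matmul reference. Simplified relative to true row-wise *block* scaling,
# where each row is split into blocks that get their own scale.
import torch

E4M3_MAX = 448.0  # largest finite value representable in float8_e4m3fn


def quantize_rowwise(x: torch.Tensor):
    """Quantize each row of `x` to FP8, returning one scale per row."""
    # Per-row absolute maximum -> per-row scale that maps the row into e4m3 range.
    amax = x.abs().amax(dim=1, keepdim=True).clamp(min=1e-12)
    scale = E4M3_MAX / amax
    x_fp8 = (x * scale).to(torch.float8_e4m3fn)
    return x_fp8, scale


def fp8_matmul_reference(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Quantize A row-wise and B column-wise (rows of B^T), then
    dequantize to float32 and multiply as a correctness reference."""
    a_fp8, a_scale = quantize_rowwise(a)        # scales: (M, 1)
    b_fp8, b_scale = quantize_rowwise(b.t())    # scales: (N, 1)
    a_deq = a_fp8.to(torch.float32) / a_scale
    b_deq = (b_fp8.to(torch.float32) / b_scale).t()
    return a_deq @ b_deq


if __name__ == "__main__":
    torch.manual_seed(0)
    a = torch.randn(128, 256)
    b = torch.randn(256, 64)
    out = fp8_matmul_reference(a, b)
    ref = a @ b
    err = ((out - ref).abs().max() / ref.abs().max()).item()
    print(f"max relative error vs. fp32 matmul: {err:.4f}")
```

A real submission would keep the operands in FP8 and apply the scales inside the GEMM kernel (e.g. via hardware FP8 MFMA/WGMMA paths) instead of dequantizing first; the reference above is only meant to show the scaling arithmetic.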
Alternatives and similar repositories for fp8-quant-matmul
Users interested in fp8-quant-matmul are comparing it to the libraries listed below.
- My submission for the GPUMODE/AMD fp8 mm challenge · ☆29 · Updated 6 months ago
- ☆22 · Updated 4 months ago
- ☆111 · Updated 2 weeks ago
- An LLM-based AI agent that writes correct and efficient GPU kernels automatically · ☆43 · Updated this week
- Samples of good AI-generated CUDA kernels · ☆92 · Updated 6 months ago
- ☆75 · Updated 3 weeks ago
- Coding CUDA every day! · ☆71 · Updated 3 weeks ago
- High-Performance SGEMM on CUDA devices · ☆113 · Updated 10 months ago
- ☆14 · Updated last month
- ☆38 · Updated 6 months ago
- Write a fast kernel and run it on Discord. See how you compare against the best! · ☆61 · Updated last week
- ☆31 · Updated 5 months ago
- ☆64 · Updated 5 months ago
- TritonParse: A Compiler Tracer, Visualizer, and Reproducer for Triton Kernels · ☆177 · Updated this week
- Implementation of a methodology that allows all sorts of user-defined GPU kernel fusion, for non-CUDA programmers · ☆32 · Updated this week
- Efficient implementation of DeepSeek Ops (Blockwise FP8 GEMM, MoE, and MLA) for AMD Instinct MI300X · ☆73 · Updated 2 weeks ago
- Lightweight Python Wrapper for OpenVINO, enabling LLM inference on NPUs · ☆25 · Updated 11 months ago
- Hand-Rolled GPU communications library · ☆72 · Updated last week
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing · ☆102 · Updated 5 months ago
- CUDA-L1: Improving CUDA Optimization via Contrastive Reinforcement Learning · ☆247 · Updated last month
- Learning about CUDA by writing PTX code · ☆148 · Updated last year
- PTX-Tutorial Written Purely By AIs (Deep Research from OpenAI and Claude 3.7) · ☆66 · Updated 8 months ago
- Quantized LLM training in pure CUDA/C++ · ☆220 · Updated this week
- Fast and Furious AMD Kernels · ☆309 · Updated last week
- Low-overhead tracing library and trace visualizer for pipelined CUDA kernels · ☆116 · Updated last week
- LLM inference on consumer devices · ☆125 · Updated 8 months ago
- ☆65 · Updated 7 months ago
- ☆113 · Updated 6 months ago
- ☆51 · Updated 6 months ago
- [WIP] Better (FP8) attention for Hopper · ☆32 · Updated 9 months ago