luongthecong123 / fp8-quant-matmul
Row-wise block scaling for FP8 quantized matrix multiplication. A solution to the GPU MODE AMD challenge.
☆16 · Updated 2 months ago
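For context, here is a minimal PyTorch sketch of the row-wise block-scaling idea the description refers to. The block size of 128, the torch.float8_e4m3fn dtype, and the dequantize-then-matmul reference path are all assumptions for illustration, not the repo's actual kernel (the AMD challenge targets MI300X, which uses a different FP8 variant):

```python
# Sketch of row-wise block-scaled FP8 quantization (illustrative only;
# block size, dtype, and function names are assumptions, not the repo's API).
import torch

FP8_MAX = torch.finfo(torch.float8_e4m3fn).max  # ~448 for e4m3

def quantize_rowwise_blocks(x: torch.Tensor, block: int = 128):
    """Quantize each (row, block)-sized tile of x to FP8 with its own scale."""
    m, k = x.shape
    assert k % block == 0
    tiles = x.view(m, k // block, block)
    # One scale per row per block: map each tile's max magnitude to FP8_MAX.
    scale = tiles.abs().amax(dim=-1, keepdim=True).clamp(min=1e-12) / FP8_MAX
    q = (tiles / scale).to(torch.float8_e4m3fn)
    return q.view(m, k), scale.squeeze(-1)  # scales have shape (m, k // block)

def dequant_matmul(qa, sa, qb, sb, block: int = 128):
    """Reference path: dequantize the tiles back to fp32, then matmul."""
    a = qa.float().view(*sa.shape, block) * sa.unsqueeze(-1)
    b = qb.float().view(*sb.shape, block) * sb.unsqueeze(-1)
    return a.view(qa.shape) @ b.view(qb.shape).T

a = torch.randn(256, 512)
b = torch.randn(128, 512)  # row-major B; we multiply by B^T
qa, sa = quantize_rowwise_blocks(a)
qb, sb = quantize_rowwise_blocks(b)
out = dequant_matmul(qa, sa, qb, sb)
print((out - a @ b.T).abs().max())  # small quantization error
```

A fast implementation would fuse the per-tile scales into the matmul epilogue instead of dequantizing up front; this sketch only shows the scaling scheme itself.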
Alternatives and similar repositories for fp8-quant-matmul
Users interested in fp8-quant-matmul are comparing it to the libraries listed below.
- My submission for the GPUMODE/AMD fp8 mm challenge ☆29 · Updated 5 months ago
- ☆22 · Updated 4 months ago
- ☆106 · Updated 2 weeks ago
- Coding CUDA every day! ☆69 · Updated last week
- ☆62 · Updated 4 months ago
- Samples of good AI-generated CUDA kernels ☆91 · Updated 5 months ago
- Efficient implementation of DeepSeek Ops (Blockwise FP8 GEMM, MoE, and MLA) for AMD Instinct MI300X ☆70 · Updated last week
- An LLM-based AI agent that writes correct and efficient GPU kernels automatically ☆38 · Updated this week
- ☆63 · Updated this week
- A PTX tutorial written purely by AIs (OpenAI Deep Research and Claude 3.7) ☆66 · Updated 7 months ago
- Fast and Furious AMD Kernels ☆110 · Updated this week
- Quantized LLM training in pure CUDA/C++. ☆215 · Updated this week
- High-Performance SGEMM on CUDA devices ☆110 · Updated 9 months ago
- ☆37 · Updated 5 months ago
- CUDA-L1: Improving CUDA Optimization via Contrastive Reinforcement Learning ☆244 · Updated 2 weeks ago
- General Matrix Multiplication using NVIDIA Tensor Cores ☆24 · Updated 9 months ago
- Custom PTX Instruction Benchmark ☆132 · Updated 8 months ago
- TritonParse: A Compiler Tracer, Visualizer, and Reproducer for Triton Kernels ☆171 · Updated this week
- ☆31 · Updated 4 months ago
- Learning about CUDA by writing PTX code. ☆147 · Updated last year
- Official Problem Sets / Reference Kernels for the GPU MODE Leaderboard! ☆140 · Updated this week
- ☆106 · Updated 5 months ago
- Lightweight Python Wrapper for OpenVINO, enabling LLM inference on NPUs ☆23 · Updated 11 months ago
- ☆65 · Updated 6 months ago
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆61 · Updated this week
- Low overhead tracing library and trace visualizer for pipelined CUDA kernels ☆94 · Updated this week
- [WIP] Better (FP8) attention for Hopper ☆32 · Updated 8 months ago
- LLM Inference on consumer devices ☆125 · Updated 7 months ago
- ☆49 · Updated 6 months ago
- Hand-Rolled GPU communications library ☆58 · Updated this week