luongthecong123 / fp8-quant-matmul
Row-wise block scaling for FP8 quantized matrix multiplication; a solution to the GPU MODE AMD challenge.
☆17 · Updated 4 months ago
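For readers unfamiliar with the technique, the sketch below illustrates the general idea behind row-wise block scaling for FP8 quantization: each row is split into fixed-size column blocks, and each block gets its own scale chosen so its absolute maximum maps onto the FP8 (e4m3) range. This is an illustrative sketch only; the block size of 128, the e4m3 format, and the function names are assumptions and are not taken from this repository, which implements the operation as GPU kernels for the AMD challenge.

```python
# Minimal PyTorch sketch of row-wise block scaling for FP8 quantization.
# Illustrative only: block size, e4m3 format, and function names are assumptions.
# Requires a PyTorch build with torch.float8_e4m3fn (>= 2.1).
import torch

FP8_E4M3_MAX = 448.0  # largest finite value representable in float8 e4m3


def quantize_rowwise_blocks(a: torch.Tensor, block: int = 128):
    """Quantize each row of `a` in column blocks, with one scale per (row, block)."""
    m, k = a.shape
    assert k % block == 0, "this sketch assumes K is a multiple of the block size"
    blocks = a.reshape(m, k // block, block)
    # Choose each scale so the block's absolute max maps to the fp8 dynamic range.
    scales = blocks.abs().amax(dim=-1, keepdim=True) / FP8_E4M3_MAX
    scales = torch.where(scales == 0, torch.ones_like(scales), scales)
    q = (blocks / scales).clamp(-FP8_E4M3_MAX, FP8_E4M3_MAX).to(torch.float8_e4m3fn)
    return q.reshape(m, k), scales.squeeze(-1)


def dequantize_rowwise_blocks(q: torch.Tensor, scales: torch.Tensor, block: int = 128):
    """Undo the block scaling, returning a float32 approximation of the input."""
    m, k = q.shape
    blocks = q.reshape(m, k // block, block).to(torch.float32)
    return (blocks * scales.unsqueeze(-1)).reshape(m, k)


if __name__ == "__main__":
    a = torch.randn(4, 256)
    q, s = quantize_rowwise_blocks(a)
    a_hat = dequantize_rowwise_blocks(q, s)
    print("fp8 dtype:", q.dtype, "| max abs error:", (a - a_hat).abs().max().item())
```

In an actual kernel the per-block scales are kept alongside the FP8 tensor and folded back in during the matmul's accumulation, rather than dequantizing the whole matrix up front as this sketch does.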
Alternatives and similar repositories for fp8-quant-matmul
Users interested in fp8-quant-matmul are comparing it to the libraries listed below.
- My submission for the GPUMODE/AMD fp8 mm challenge ☆29 · Updated 7 months ago
- ☆117 · Updated 3 weeks ago
- ☆23 · Updated 6 months ago
- An LLM-based AI agent that automatically writes correct and efficient GPU kernels ☆56 · Updated last week
- mHC kernels implemented in CUDA ☆233 · Updated 2 weeks ago
- ☆71 · Updated 7 months ago
- PTX-Tutorial Written Purely By AIs (Deep Research by OpenAI and Claude 3.7) ☆66 · Updated 10 months ago
- Lightweight Python Wrapper for OpenVINO, enabling LLM inference on NPUs ☆26 · Updated last year
- Efficient implementation of DeepSeek Ops (Blockwise FP8 GEMM, MoE, and MLA) for AMD Instinct MI300X ☆75 · Updated 2 months ago
- ☆87 · Updated this week
- Custom PTX Instruction Benchmark ☆138 · Updated 11 months ago
- ☆117 · Updated 8 months ago
- General Matrix Multiplication using NVIDIA Tensor Cores ☆28 · Updated last year
- CUDA-L2: Surpassing cuBLAS Performance for Matrix Multiplication through Reinforcement Learning ☆383 · Updated 3 weeks ago
- High-Performance FP32 GEMM on CUDA devices ☆117 · Updated last year
- coding CUDA everyday! ☆72 · Updated last month
- Samples of good AI generated CUDA kernels ☆99 · Updated 8 months ago
- We aim to redefine Data Parallel libraries' portability, performance, programmability and maintainability, by using C++ standard features, i… ☆38 · Updated this week
- ☆44 · Updated 8 months ago
- Learning about CUDA by writing PTX code. ☆151 · Updated last year
- [WIP] Better (FP8) attention for Hopper ☆32 · Updated 11 months ago
- ☆15 · Updated 2 months ago
- ☆65 · Updated 9 months ago
- CUDA-L1: Improving CUDA Optimization via Contrastive Reinforcement Learning ☆285 · Updated 2 months ago
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling ☆21 · Updated last month
- ☆32 · Updated 6 months ago
- Kernel Library Wheel for SGLang ☆17 · Updated this week
- ☆90 · Updated last month
- Decoding Attention is specially optimized for MHA, MQA, GQA and MLA, using CUDA cores for the decoding stage of LLM inference ☆46 · Updated 7 months ago
- ☆52 · Updated 8 months ago