luongthecong123 / fp8-quant-matmul
Row-wise block scaling for FP8-quantized matrix multiplication. Solution to the GPU MODE AMD challenge.
☆17 · Updated 3 months ago
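For context, here is a minimal PyTorch sketch of the numerics behind row-wise block-scaled FP8 quantization for matmul. This is not the repository's actual kernel (the challenge submissions target AMD MI300X with HIP/Triton); the block size, the e4m3 format, and the dequantizing reference matmul are illustrative assumptions.

```python
import torch

BLOCK = 128  # assumed block width along the K dimension

def quantize_rowwise_blocks(x: torch.Tensor):
    """Give each (row, K-block) tile of x its own FP8 scale."""
    m, k = x.shape
    assert k % BLOCK == 0
    blocks = x.view(m, k // BLOCK, BLOCK)
    # Map each block's max magnitude onto the e4m3 max representable value (448).
    amax = blocks.abs().amax(dim=-1, keepdim=True).clamp(min=1e-12)
    scales = amax / 448.0
    q = (blocks / scales).to(torch.float8_e4m3fn)
    return q.reshape(m, k), scales.squeeze(-1)  # scales: (m, k // BLOCK)

def matmul_dequant(aq, a_scales, bq, b_scales):
    """Reference block-by-block dequantizing matmul (clear, not fast)."""
    m, k = aq.shape
    n = bq.shape[0]  # bq holds B transposed, quantized row-wise: (n, k)
    out = torch.zeros(m, n)
    for blk in range(k // BLOCK):
        cols = slice(blk * BLOCK, (blk + 1) * BLOCK)
        a = aq[:, cols].to(torch.float32) * a_scales[:, blk:blk + 1]
        b = bq[:, cols].to(torch.float32) * b_scales[:, blk:blk + 1]
        out += a @ b.T
    return out

a, bt = torch.randn(64, 256), torch.randn(32, 256)  # bt is B^T so both operands scale row-wise
aq, asc = quantize_rowwise_blocks(a)
bq, bsc = quantize_rowwise_blocks(bt)
err = (matmul_dequant(aq, asc, bq, bsc) - a @ bt.T).abs().max()
print(f"max abs error vs fp32: {err.item():.4f}")
```

A real kernel would fuse the per-block scale multiplies into the FP8 GEMM epilogue rather than dequantizing to fp32 first; the loop above only shows the numerics.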
Alternatives and similar repositories for fp8-quant-matmul
Users interested in fp8-quant-matmul are comparing it to the repositories listed below.
- My submission for the GPUMODE/AMD fp8 mm challenge ☆29 · Updated 6 months ago
- ☆114 · Updated last month
- ☆23 · Updated 5 months ago
- ☆68 · Updated 6 months ago
- Samples of good AI-generated CUDA kernels ☆96 · Updated 7 months ago
- TritonParse: A Compiler Tracer, Visualizer, and Reproducer for Triton Kernels ☆179 · Updated this week
- An LLM-based AI agent that automatically writes correct and efficient GPU kernels ☆49 · Updated last week
- ☆32 · Updated 5 months ago
- ☆43 · Updated 7 months ago
- Coding CUDA every day! ☆72 · Updated 3 weeks ago
- General Matrix Multiplication using NVIDIA Tensor Cores ☆27 · Updated 11 months ago
- [WIP] Better (FP8) attention for Hopper ☆32 · Updated 10 months ago
- Experimental GPU language with meta-programming ☆24 · Updated last year
- High-Performance SGEMM on CUDA devices ☆114 · Updated 11 months ago
- Lightweight Python Wrapper for OpenVINO, enabling LLM inference on NPUs ☆26 · Updated last year
- ☆115 · Updated 7 months ago
- LLM Inference on consumer devices ☆128 · Updated 9 months ago
- PipeInfer: Accelerating LLM Inference using Asynchronous Pipelined Speculation ☆31 · Updated last year
- ☆65 · Updated 8 months ago
- ☆84 · Updated 3 weeks ago
- Decoding Attention, specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference ☆46 · Updated 6 months ago
- ☆84 · Updated 2 weeks ago
- Low-overhead tracing library and trace visualizer for pipelined CUDA kernels ☆127 · Updated last month
- CUDA-L2: Surpassing cuBLAS Performance for Matrix Multiplication through Reinforcement Learning ☆252 · Updated 2 weeks ago
- ☆52 · Updated 7 months ago
- Measuring Thinking Efficiency in Reasoning Models - Research Repository ☆37 · Updated 3 weeks ago
- Lightweight Llama 3 8B Inference Engine in CUDA C ☆53 · Updated 9 months ago
- Cookbook of SGLang recipes ☆48 · Updated this week
- Efficient implementation of DeepSeek Ops (Blockwise FP8 GEMM, MoE, and MLA) for AMD Instinct MI300X ☆74 · Updated last month
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling ☆21 · Updated 2 weeks ago