luongthecong123 / fp8-quant-matmul
Row-wise block scaling for fp8 quantization matrix multiplication. Solution to GPU mode AMD challenge.
☆15 · Updated last month
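Row-wise scaling here means each row of the left operand (and each column of the right operand) gets its own scale factor mapping its dynamic range onto FP8's. A minimal NumPy sketch of the idea, assuming the e4m3 format; the function names are illustrative and the float16 round-trip merely stands in for a real fp8 cast, so this is not the repo's actual kernel or API:

```python
import numpy as np

FP8_E4M3_MAX = 448.0  # largest finite value representable in fp8 e4m3

def quantize_rows(x):
    """One scale per row: map each row's max |value| onto the fp8 range."""
    amax = np.abs(x).max(axis=1, keepdims=True)
    scale = np.where(amax == 0.0, 1.0, amax / FP8_E4M3_MAX)
    q = np.clip(x / scale, -FP8_E4M3_MAX, FP8_E4M3_MAX)
    q = q.astype(np.float16).astype(np.float32)  # crude stand-in for an fp8 cast
    return q, scale

def fp8_matmul(a, b):
    """C = A @ B with A quantized per row and B per column, then dequantized."""
    qa, sa = quantize_rows(a)    # sa has shape (M, 1)
    qb, sb = quantize_rows(b.T)  # rows of b.T are columns of b; sb is (N, 1)
    c = qa @ qb.T                # accumulate in float32
    return c * sa * sb.T         # undo both scales elementwise

# Usage: the result closely tracks the full-precision matmul.
rng = np.random.default_rng(0)
a = rng.standard_normal((64, 128)).astype(np.float32)
b = rng.standard_normal((128, 32)).astype(np.float32)
c = fp8_matmul(a, b)
```

Real kernels refine this by splitting each row into fixed-size blocks with one scale per block, which bounds quantization error when a row mixes large and small magnitudes.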
Alternatives and similar repositories for fp8-quant-matmul
Users interested in fp8-quant-matmul are comparing it to the libraries listed below.
- My submission for the GPUMODE/AMD fp8 mm challenge ☆29 · Updated 4 months ago
- ☆22 · Updated 3 months ago
- ☆102 · Updated this week
- Samples of good AI-generated CUDA kernels ☆91 · Updated 4 months ago
- An LLM-based AI agent that automatically writes correct and efficient GPU kernels ☆35 · Updated 2 months ago
- TritonParse: A Compiler Tracer, Visualizer, and Reproducer for Triton Kernels ☆164 · Updated this week
- ☆60 · Updated 4 months ago
- ☆35 · Updated 5 months ago
- PTX-Tutorial written purely by AIs (OpenAI's Deep Research and Claude 3.7) ☆66 · Updated 7 months ago
- Learning about CUDA by writing PTX code ☆145 · Updated last year
- Coding CUDA every day! ☆64 · Updated 6 months ago
- Efficient implementation of DeepSeek Ops (Blockwise FP8 GEMM, MoE, and MLA) for AMD Instinct MI300X ☆71 · Updated 2 months ago
- High-Performance SGEMM on CUDA devices ☆107 · Updated 9 months ago
- Custom PTX Instruction Benchmark ☆131 · Updated 8 months ago
- CUDA-L1: Improving CUDA Optimization via Contrastive Reinforcement Learning ☆192 · Updated 2 months ago
- Quantized LLM training in pure CUDA/C++ ☆206 · Updated last week
- TileFusion: an experimental C++ macro kernel template library that elevates the abstraction level of CUDA C for tile processing ☆100 · Updated 3 months ago
- ☆42 · Updated last month
- ☆31 · Updated 3 months ago
- ☆13 · Updated 3 weeks ago
- ☆101 · Updated 5 months ago
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆58 · Updated 2 weeks ago
- General Matrix Multiplication using NVIDIA Tensor Cores ☆22 · Updated 9 months ago
- torchcomms: a modern PyTorch communications API ☆103 · Updated this week
- Automatic differentiation for Triton kernels ☆11 · Updated 2 months ago
- ☆46 · Updated 5 months ago
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning ☆119 · Updated 3 weeks ago
- LLM inference on consumer devices ☆124 · Updated 7 months ago
- Implementation of a methodology that enables user-defined GPU kernel fusion for non-CUDA programmers ☆25 · Updated last week
- Lightweight Python wrapper for OpenVINO, enabling LLM inference on NPUs ☆23 · Updated 10 months ago