Liu-xiandong / How_to_optimize_in_GPU
This is a series of GPU optimization topics that explains in detail how to optimize CUDA kernels. It covers several basic kernel optimizations, including elementwise, reduce, SGEMV, and SGEMM, among others; the performance of these kernels is at or near the theoretical limit.
☆937 · Updated last year
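
For a flavor of the simplest kernel class these repositories optimize, here is a minimal grid-stride elementwise add in CUDA. This is a sketch written for this listing, not code from How_to_optimize_in_GPU; the kernel name, sizes, and launch configuration are illustrative.

```cuda
// Minimal grid-stride elementwise add: the simplest of the kernel classes
// (elementwise, reduce, sgemv, sgemm) the repository walks through.
// All names and sizes here are illustrative, not taken from the repository.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void elementwise_add(const float* a, const float* b, float* c, int n) {
    // Grid-stride loop: each thread processes multiple elements, so the
    // launch configuration is independent of the problem size.
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += blockDim.x * gridDim.x) {
        c[i] = a[i] + b[i];
    }
}

int main() {
    const int n = 1 << 20;
    float *a, *b, *c;
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    elementwise_add<<<256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %.1f (expected 3.0)\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

An optimized version would mainly add vectorized float4 loads and stores to improve memory throughput, since elementwise kernels are bandwidth-bound.
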
Alternatives and similar repositories for How_to_optimize_in_GPU:
Users interested in How_to_optimize_in_GPU are comparing it to the repositories listed below
- Row-major matmul optimization ☆608 · Updated last year
- How to optimize some algorithms in CUDA. ☆1,960 · Updated this week
- Xiao's CUDA Optimization Guide [actively adding new content] ☆267 · Updated 2 years ago
- Optimizing SGEMM kernel functions on NVIDIA GPUs to close-to-cuBLAS performance. ☆327 · Updated 2 months ago
- A simple, high-performance CUDA GEMM implementation. ☆350 · Updated last year
- A CUDA tutorial for learning CUDA programming from scratch. ☆216 · Updated 8 months ago
- Yinghan's Code Sample ☆313 · Updated 2 years ago
- Several optimization methods for half-precision general matrix multiplication (HGEMM) using tensor cores with the WMMA API and MMA PTX instructions ☆360 · Updated 6 months ago
- How to learn PyTorch and OneFlow ☆402 · Updated 11 months ago
- Learning how CUDA works ☆213 · Updated last week
- Step-by-step optimization of CUDA SGEMM ☆292 · Updated 2 years ago
- Hand-written CUDA operator implementations and interview guide ☆188 · Updated last month
- A self-learning tutorial for CUDA high-performance programming. ☆418 · Updated this week
- An easy-to-understand TensorOp Matmul tutorial ☆326 · Updated 5 months ago
- Sample code for my CUDA programming book ☆1,660 · Updated 3 weeks ago
- ☆422 · Updated 9 years ago
- ☆2,349 · Updated last year
- 📚 200+ Tensor/CUDA Cores kernels, ⚡️flash-attn-mma, ⚡️hgemm with WMMA, MMA, and CuTe (98%~100% of cuBLAS/FA2 TFLOPS 🎉🎉) ☆2,754 · Updated last week
- Code implementations for "Professional CUDA C Programming", containing most of the code from chapters 2 through 8 of the book plus the author's notes, all implemented by hand by the author; errors are inevitable, so please refer to it with care, and corrections are very welcome. If it helps, please give it a star, it helps the author a lot, thanks! ☆320 · Updated 2 years ago
- Fast CUDA matrix multiplication from scratch ☆657 · Updated last year
- Flash attention tutorial written in Python, Triton, CUDA, and CUTLASS ☆293 · Updated 2 months ago
- An unofficial CUDA assembler, for all generations of SASS, hopefully :) ☆458 · Updated last year
- Study notes on high-performance computing, including notes and code demos of the related topics, continuously being improved. If it helps, please give it a star, it helps the author a lot, thanks! ☆410 · Updated last year
- FlagGems is an operator library for large language models implemented in the Triton language. ☆445 · Updated this week
- Training materials associated with NVIDIA's CUDA Training Series (www.olcf.ornl.gov/cuda-training-series/) ☆715 · Updated 6 months ago
- Hands-On Practical MLIR Tutorial ☆414 · Updated last year
- BLISlab: A Sandbox for Optimizing GEMM ☆503 · Updated 3 years ago
- A good project for campus recruiting (autumn and spring) and internships that walks you through implementing, from scratch, an LLM inference framework supporting LLama2/3 and Qwen2.5. ☆301 · Updated this week
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. ☆397 · Updated this week
- A CUDA learning path based on the book "CUDA Programming: Basics and Practice" (《CUDA 编程：基础与实践》) by Zheyong Fan. ☆289 · Updated last year