tpn / cuda-by-example
Code for NVIDIA's CUDA By Example Book.
☆40 · Updated 4 years ago
Related projects
Alternatives and complementary repositories for cuda-by-example
- CUDA Matrix Multiplication Optimization (☆139, updated 3 months ago)
- Standalone Flash Attention v2 kernel without a libtorch dependency (☆98, updated 2 months ago)
- Some CUDA design patterns and a bit of template magic for CUDA (☆146, updated last year)
- Tutorials for writing high-performance GPU operators in AI frameworks (☆122, updated last year)
- Common libraries for PPL projects (☆29, updated 3 weeks ago)
- Step-by-step optimization of CUDA SGEMM (☆225, updated 2 years ago)
- TiledCUDA, a highly efficient kernel template library designed to elevate CUDA C's level of abstraction for processing tiles (☆148, updated this week)
- TVMScript kernel for deformable attention (☆24, updated 2 years ago)
- A simple high-performance CUDA GEMM implementation (☆334, updated 10 months ago)
- CUDA 8-bit Tensor Core matrix multiplication based on the m16n16k16 WMMA API (☆26, updated last year)
- μ-Cuda, covering the last mile of CUDA, with features: IntelliSense-friendly, structured launch, automatic CUDA graph generation and updating (☆149, updated this week)
- A study of CUTLASS (☆19, updated last year)
- Flash attention tutorial written in Python, Triton, CUDA, and CUTLASS (☆194, updated 4 months ago)
- FP64-equivalent GEMM via INT8 Tensor Cores using the Ozaki scheme (☆46, updated 2 months ago)
- LLaMA INT4 CUDA inference with AWQ (☆47, updated 4 months ago)
- Examples from Programming in Parallel with CUDA (☆107, updated last year)
- High-speed GEMV kernels, with up to 2.7x speedup over the PyTorch baseline (☆87, updated 3 months ago)
- Several optimization methods for half-precision general matrix-vector multiplication (HGEMV) using CUDA cores (☆48, updated 2 months ago)
- A standalone GEMM kernel for FP16 activations and quantized weights, extracted from FasterTransformer (☆85, updated 8 months ago)
- Matrix multiply-accumulate with CUDA and WMMA (Tensor Cores) (☆114, updated 4 years ago)
- An extension library for the WMMA API (Tensor Core API) (☆82, updated 3 months ago)
- Performance of the C++ interfaces of flash attention and flash attention v2 in large language model (LLM) inference scenarios (☆26, updated 2 months ago)
- Training material for the Nsight developer tools (☆128, updated 3 months ago)