Code & examples for "CUDA - From Correctness to Performance"
☆123 · updated Oct 24, 2024
Alternatives and similar repositories for CUDA-From-Correctness-To-Performance-Code
Users interested in CUDA-From-Correctness-To-Performance-Code are comparing it to the libraries listed below.
- ☆27 · updated Jan 8, 2024
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of … ☆320 · updated Jun 10, 2025
- 🦙🦙.🦀 ☆28 · updated Sep 24, 2023
- ☆38 · updated Aug 7, 2025
- ☆16 · updated Apr 22, 2025
- High-performance RMSNorm implementation using SM core storage (registers and shared memory) ☆29 · updated Jan 22, 2026
- Wiki for HPC ☆137 · updated Jul 23, 2025
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios. ☆43 · updated Feb 27, 2025
- A pared-down flash-attention implementation built with cutlass, intended to be instructive ☆59 · updated Aug 12, 2024
- Flash Attention implemented with CuTe. ☆102 · updated Dec 17, 2024
- Multiple GEMM operators are constructed with cutlass to support LLM inference. ☆19 · updated Aug 3, 2025
- Awesome code, projects, books, etc. related to CUDA ☆31 · updated Feb 3, 2026
- [NeurIPS 2025] ClusterFusion: Expanding Operator Fusion Scope for LLM Inference via Cluster-Level Collective Primitive ☆66 · updated Dec 11, 2025
- A flash attention tutorial written in Python, Triton, CUDA, and cutlass ☆491 · updated Jan 20, 2026
- AI-based singing voice synthesis database generator ☆13 · updated Aug 12, 2022
- High-performance Transformer implementation in C++. ☆153 · updated Jan 18, 2025
- Efficient Distributed GPU Programming for Exascale, an SC/ISC tutorial ☆354 · updated Dec 3, 2025
- Flash Attention in ~100 lines of CUDA (forward pass only) ☆1,092 · updated Dec 30, 2024
- CUDA 8-bit Tensor Core matrix multiplication based on the m16n16k16 WMMA API ☆34 · updated Sep 15, 2023
- Decoding Attention is specifically optimized for MHA, MQA, GQA, and MLA, using CUDA cores for the decoding stage of LLM inference. ☆45 · updated Jun 11, 2025
- Learning how CUDA works ☆378 · updated Mar 3, 2025
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆95 · updated Feb 20, 2026
- Materials for learning SGLang ☆775 · updated Jan 5, 2026
- A lightweight design for computation-communication overlap. ☆225 · updated Jan 20, 2026
- An easy-to-understand TensorOp matmul tutorial ☆409 · updated Mar 5, 2026
- ☆234 · updated Nov 19, 2025
- Handwritten GEMM using Intel AMX (Advanced Matrix Extensions) ☆17 · updated Jan 11, 2025
- Xiao's CUDA Optimization Guide [NO LONGER ADDING NEW CONTENT] ☆323 · updated Nov 8, 2022
- 📚LeetCUDA: Modern CUDA Learn Notes with PyTorch for Beginners🐑, 200+ CUDA Kernels, Tensor Cores, HGEMM, FA-2 MMA.🎉 ☆9,932 · updated this week
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel … ☆192 · updated Jan 28, 2025
- ☆56 · updated May 19, 2025
- ☆14 · updated Jan 18, 2023
- ☆169 · updated Feb 5, 2026
- Triton documentation in Simplified Chinese / Triton 中文文档 ☆107 · updated Mar 5, 2026
- https://bbuf.github.io/gpu-glossary-zh/ ☆26 · updated Nov 7, 2025
- 🔥 LLM-powered GPU kernel synthesis: Train models to convert PyTorch ops into optimized Triton kernels via SFT+RL. Multi-turn compilation… ☆132 · updated Nov 10, 2025
- Benchmarking popular parallel programming frameworks, with sharp commentary from 小彭老师; covered so far: Taichi, SyCL, C++, OpenMP, TBB, Mojo ☆40 · updated Aug 28, 2023
- Compare different hardware platforms via the Roofline Model for LLM inference tasks. ☆118 · updated Mar 13, 2024
- [ICLR 2024] This is the official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Mod… ☆31 · updated Mar 12, 2024
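The roofline-model comparison entry above reduces to one formula: attainable throughput is the minimum of the compute ceiling and arithmetic intensity times memory bandwidth. A minimal sketch in Python; the hardware numbers are illustrative assumptions, not taken from any repository listed here:

```python
def roofline(peak_flops: float, peak_bw: float, intensity: float) -> float:
    """Attainable FLOP/s under the roofline model:
    min(compute ceiling, memory ceiling = intensity * bandwidth)."""
    return min(peak_flops, intensity * peak_bw)

# Illustrative A100-like numbers (assumptions, not measurements):
peak_flops = 312e12  # FP16 tensor-core peak, FLOP/s
peak_bw = 2.0e12     # HBM bandwidth, bytes/s

# Decode-stage GEMV in LLM inference is memory-bound: roughly 2 FLOPs
# per 2-byte fp16 weight loaded, i.e. intensity near 1 FLOP/byte,
# so the memory ceiling (intensity * bandwidth) applies.
decode_perf = roofline(peak_flops, peak_bw, 1.0)
# A large prefill GEMM has high intensity, so the compute ceiling applies.
prefill_perf = roofline(peak_flops, peak_bw, 300.0)
```

This is why the decoding-focused kernels in the list above optimize for bandwidth rather than raw FLOP/s.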
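Several entries above implement flash attention; the trick they share is the online (streaming) softmax, which maintains a running maximum and a running rescaled sum so the scores never need a second pass over memory. A minimal sketch of that trick, not taken from any of the repositories listed:

```python
import math

def online_softmax(xs):
    """Numerically stable softmax computed in one streaming pass.
    Tracks a running max m and a running sum s of exp(x - m),
    rescaling s whenever a new maximum appears."""
    m = float("-inf")  # running maximum
    s = 0.0            # running sum of exp(x - m)
    for x in xs:
        m_new = max(m, x)
        s = s * math.exp(m - m_new) + math.exp(x - m_new)
        m = m_new
    return [math.exp(x - m) / s for x in xs]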
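The RMSNorm entry above keeps its reduction in registers and shared memory; the math such a kernel must reproduce is small enough to state as a reference. A plain-Python sketch (the eps value is an assumption, not taken from that repository):

```python
def rmsnorm(xs, weight, eps=1e-6):
    """Reference RMSNorm: x / sqrt(mean(x^2) + eps) * weight.
    A CUDA kernel computes the mean-of-squares reduction on-chip
    (registers / shared memory); this is only the math it implements."""
    ms = sum(x * x for x in xs) / len(xs)  # mean of squares
    inv = (ms + eps) ** -0.5               # 1 / RMS
    return [x * inv * w for x, w in zip(xs, weight)]
```

A fast kernel differs only in how the reduction is staged, not in this formula.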