unixpickle / learn-ptx
Learning about CUDA by writing PTX code.
☆153 · Feb 27, 2024 · Updated last year
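
The repository's description, "Learning about CUDA by writing PTX code," is the only context given on this page. As an illustrative, hypothetical sketch (not code from learn-ptx), the snippet below shows one common way hand-written PTX meets CUDA C++: an `add.f32` PTX instruction embedded in a kernel via inline assembly and compiled with nvcc.

```cuda
// Hypothetical sketch, not taken from learn-ptx: a CUDA kernel that issues a
// hand-written PTX add.f32 instruction through inline assembly.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void add_ptx(const float* a, const float* b, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float r;
        // add.f32 is a PTX instruction; %0..%2 are .f32 register operands.
        asm volatile("add.f32 %0, %1, %2;" : "=f"(r) : "f"(a[i]), "f"(b[i]));
        out[i] = r;
    }
}

int main() {
    const int n = 1024;
    float *a, *b, *c;
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }
    add_ptx<<<(n + 255) / 256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();
    printf("c[0] = %f\n", c[0]);  // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```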
Alternatives and similar repositories for learn-ptx
Users interested in learn-ptx are comparing it to the repositories listed below.
- A high-performance attention mechanism that computes softmax normalization in a single streaming pass using running accumulators (online … ☆28 · Oct 11, 2025 · Updated 4 months ago — see the online-softmax sketch after this list
- Benchmark tests supporting the TiledCUDA library. ☆18 · Nov 19, 2024 · Updated last year
- ☆23 · Jul 11, 2025 · Updated 7 months ago
- Persistent dense GEMM for Hopper in `CuTeDSL` ☆15 · Aug 9, 2025 · Updated 6 months ago
- Row-wise block scaling for fp8 quantization matrix multiplication. Solution to the GPU MODE AMD challenge. ☆17 · Updated this week
- ☆15 · Oct 30, 2025 · Updated 3 months ago
- CUTLASS and CuTe examples ☆127 · Nov 30, 2025 · Updated 2 months ago
- Experimental GPU language with meta-programming ☆25 · Sep 6, 2024 · Updated last year
- High-performance FP32 GEMM on CUDA devices ☆117 · Jan 21, 2025 · Updated last year
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS ☆251 · May 6, 2025 · Updated 9 months ago
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆198 · Jun 1, 2025 · Updated 8 months ago
- My submission for the GPU MODE/AMD fp8 mm challenge ☆29 · Jun 4, 2025 · Updated 8 months ago
- Fast low-bit matmul kernels in Triton ☆429 · Feb 1, 2026 · Updated 2 weeks ago
- ☆90 · Dec 16, 2025 · Updated last month
- Deployment examples for FastHTML ☆43 · Sep 11, 2024 · Updated last year
- Custom PTX instruction benchmark ☆138 · Feb 27, 2025 · Updated 11 months ago
- Tile primitives for speedy kernels ☆3,139 · Updated this week
- CUDA extensions for PyTorch ☆12 · Dec 2, 2025 · Updated 2 months ago
- Speeding up your Python code 1000x ☆12 · Apr 2, 2025 · Updated 10 months ago
- Several optimization methods for half-precision general matrix multiplication (HGEMM) using tensor cores with the WMMA API and MMA PTX instruct… ☆522 · Sep 8, 2024 · Updated last year
- An experimental communicating attention kernel based on DeepEP. ☆35 · Jul 29, 2025 · Updated 6 months ago
- UNet diffusion model in pure CUDA ☆661 · Jun 28, 2024 · Updated last year
- ☆79 · Dec 27, 2024 · Updated last year
- rust-writing-os course from https://rust.os2edu.cn ☆11 · Apr 29, 2022 · Updated 3 years ago
- Utilities for training very large models ☆58 · Sep 25, 2024 · Updated last year
- Faster PyTorch bitsandbytes 4-bit fp4 nn.Linear ops ☆30 · Mar 16, 2024 · Updated last year
- An implementation of a deep learning framework and models in C ☆47 · Apr 1, 2025 · Updated 10 months ago
- NanoGPT (124M) quality in 2.67B tokens ☆28 · Sep 17, 2025 · Updated 4 months ago
- Flexibly track outputs and grad-outputs of torch.nn.Module. ☆13 · Oct 6, 2023 · Updated 2 years ago
- Content-addressable memory using dimensionality reduction ☆13 · Apr 22, 2017 · Updated 8 years ago
- ☆42 · Jan 24, 2026 · Updated 3 weeks ago
- It's a baby compiler. (Lean btw.) ☆16 · May 19, 2025 · Updated 8 months ago
- Qwen3-0.6B megakernel: 527 tok/s decode on RTX 3090 (3.8x faster than PyTorch) ☆70 · Updated this week
- Extensible collectives library in Triton ☆95 · Mar 31, 2025 · Updated 10 months ago
- GPU programming related news and material links ☆1,967 · Sep 17, 2025 · Updated 4 months ago
- Incubator repo for the CUDA-TileIR backend ☆102 · Updated this week
- CUDA matrix multiplication optimization ☆256 · Jul 19, 2024 · Updated last year
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆326 · Updated this week
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆71 · Updated this week
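
The first item above describes an "online" softmax that keeps running accumulators over a single streaming pass. As an illustrative sketch of that idea under stated assumptions (it is not code from the listed repository, and it ignores the attention-specific parts), the kernel below normalizes each row of a matrix while tracking a running maximum `m` and a running sum `d` of `exp(x - m)`.

```cuda
// Hypothetical sketch of the online-softmax idea: one thread per row, a single
// streaming pass that maintains a running max and a rescaled running sum.
#include <cstdio>
#include <cmath>
#include <cuda_runtime.h>

__global__ void online_softmax_rows(const float* x, float* y, int rows, int cols) {
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row >= rows) return;
    const float* in = x + row * cols;
    float* out = y + row * cols;

    // Single pass: when the running max changes, rescale the running sum.
    float m = -INFINITY, d = 0.0f;
    for (int j = 0; j < cols; ++j) {
        float v = in[j];
        float m_new = fmaxf(m, v);
        d = d * expf(m - m_new) + expf(v - m_new);
        m = m_new;
    }
    // Second pass only writes the normalized values; no extra reduction needed.
    for (int j = 0; j < cols; ++j) out[j] = expf(in[j] - m) / d;
}

int main() {
    const int rows = 2, cols = 4;
    float h[rows * cols] = {1, 2, 3, 4, 0, 0, 0, 0};
    float *dx, *dy;
    cudaMalloc(&dx, sizeof(h));
    cudaMalloc(&dy, sizeof(h));
    cudaMemcpy(dx, h, sizeof(h), cudaMemcpyHostToDevice);
    online_softmax_rows<<<1, 32>>>(dx, dy, rows, cols);
    cudaMemcpy(h, dy, sizeof(h), cudaMemcpyDeviceToHost);
    printf("row 0: %f %f %f %f\n", h[0], h[1], h[2], h[3]);  // each row sums to 1
    cudaFree(dx); cudaFree(dy);
    return 0;
}
```

The rescaling term `d * expf(m - m_new)` is what keeps the accumulated sum consistent whenever the running maximum changes, which is the core trick that lets softmax normalization be computed without a separate pass to find the global maximum first.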