Learn CUDA with PyTorch
☆231 · Feb 23, 2026 · Updated last week
Alternatives and similar repositories for learn-cuda
Users interested in learn-cuda are comparing it to the libraries listed below.
- My submission for the GPUMODE/AMD fp8 mm challenge ☆29 · Jun 4, 2025 · Updated 9 months ago
- A Quirky Assortment of CuTe Kernels ☆838 · Updated this week
- ☆32 · Jul 2, 2025 · Updated 8 months ago
- A curated list of resources for learning and exploring Triton, OpenAI's programming language for writing efficient GPU code. ☆461 · Mar 10, 2025 · Updated 11 months ago
- Tiny-Megatron, a minimalistic re-implementation of the Megatron library ☆23 · Sep 1, 2025 · Updated 6 months ago
- From Minimal GEMM to Everything ☆163 · Feb 10, 2026 · Updated 3 weeks ago
- Repository for Go shared libraries (for now). ☆11 · Dec 1, 2025 · Updated 3 months ago
- Explore training for quantized models ☆26 · Jul 12, 2025 · Updated 7 months ago
- Prototype routines for GPU quantization written using PyTorch. ☆21 · Feb 8, 2026 · Updated 3 weeks ago
- GPU programming related news and material links ☆2,010 · Sep 17, 2025 · Updated 5 months ago
- Cataloging released Triton kernels. ☆295 · Sep 9, 2025 · Updated 5 months ago
- ☆91 · Feb 29, 2024 · Updated 2 years ago
- ☆21 · Mar 3, 2025 · Updated last year
- See https://github.com/cuda-mode/triton-index/ instead! ☆11 · May 8, 2024 · Updated last year
- JAX implementation of the GPTQ quantization algorithm ☆10 · Jul 19, 2023 · Updated 2 years ago
- Fast low-bit matmul kernels in Triton ☆436 · Feb 1, 2026 · Updated last month
- CuTe layout visualization ☆30 · Jan 18, 2026 · Updated last month
- ☆12 · Jan 4, 2024 · Updated 2 years ago
- Row-wise block scaling for fp8 quantized matrix multiplication. Solution to the GPU MODE AMD challenge. ☆17 · Feb 9, 2026 · Updated 3 weeks ago
- Simple (fast) transformer inference in PyTorch with torch.compile + lit-llama code ☆10 · Aug 29, 2023 · Updated 2 years ago
- Ring-attention experiments ☆165 · Oct 17, 2024 · Updated last year
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… ☆133 · Feb 21, 2026 · Updated last week
- ☆30 · Jan 26, 2023 · Updated 3 years ago
- FlexAttention with FlashAttention3 support ☆27 · Oct 5, 2024 · Updated last year
- Let coding agents use ncu to analyze CUDA programs automatically! ☆47 · Feb 5, 2026 · Updated last month
- C++20 N-dimensional Matrix class for a hobby project ☆23 · Nov 11, 2021 · Updated 4 years ago
- ☆14 · Nov 3, 2025 · Updated 4 months ago
- ☆301 · Updated this week
- A place to store reusable transformer components of my own creation or found on the interwebs ☆73 · Updated this week
- Transformers training on a supercomputer with the 🤗 Stack and Slurm ☆15 · May 9, 2024 · Updated last year
- My tests and experiments with some popular DL frameworks. ☆17 · Sep 11, 2025 · Updated 5 months ago
- Tile primitives for speedy kernels ☆3,202 · Feb 24, 2026 · Updated last week
- Flash attention tutorial written in Python, Triton, CUDA, and CUTLASS ☆490 · Jan 20, 2026 · Updated last month
- Fastest kernels written from scratch ☆550 · Sep 18, 2025 · Updated 5 months ago
- ☆46 · May 24, 2025 · Updated 9 months ago
- Learning about CUDA by writing PTX code. ☆153 · Feb 27, 2024 · Updated 2 years ago
- ☆262 · Jul 11, 2024 · Updated last year
- FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference. ☆335 · Nov 2, 2025 · Updated 4 months ago
- ☆15 · Jul 9, 2024 · Updated last year