unixpickle / learn-ptx
Learning about CUDA by writing PTX code.
☆133 · Updated last year
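learn-ptx is about writing GPU kernels directly in PTX, NVIDIA's virtual ISA, rather than in CUDA C++. As a rough illustration of what that involves (a minimal sketch, not code taken from the repository), a hand-written PTX kernel for elementwise float addition might look like the following, assuming a 1-D launch with a single block and exactly one thread per element:

```ptx
//
// Minimal hand-written PTX sketch (illustrative, not from learn-ptx):
// out[i] = a[i] + b[i], one element per thread, single 1-D block assumed.
//
.version 7.0
.target sm_70
.address_size 64

.visible .entry add_f32(
    .param .u64 p_a,
    .param .u64 p_b,
    .param .u64 p_out
)
{
    .reg .u32 %r<2>;
    .reg .u64 %rd<8>;
    .reg .f32 %f<4>;

    // Load the three pointer arguments and convert to global addresses.
    ld.param.u64        %rd1, [p_a];
    ld.param.u64        %rd2, [p_b];
    ld.param.u64        %rd3, [p_out];
    cvta.to.global.u64  %rd1, %rd1;
    cvta.to.global.u64  %rd2, %rd2;
    cvta.to.global.u64  %rd3, %rd3;

    // Byte offset for this thread: tid.x * sizeof(float).
    mov.u32             %r1, %tid.x;
    mul.wide.u32        %rd4, %r1, 4;

    add.u64             %rd5, %rd1, %rd4;
    add.u64             %rd6, %rd2, %rd4;
    add.u64             %rd7, %rd3, %rd4;

    // out[i] = a[i] + b[i]
    ld.global.f32       %f1, [%rd5];
    ld.global.f32       %f2, [%rd6];
    add.f32             %f3, %f1, %f2;
    st.global.f32       [%rd7], %f3;

    ret;
}
```

A kernel like this is typically loaded at runtime through the CUDA driver API (e.g. cuModuleLoadData followed by cuLaunchKernel), which is the usual workflow when experimenting with raw PTX.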
Alternatives and similar repositories for learn-ptx
Users interested in learn-ptx are comparing it to the repositories listed below.
- High-Performance SGEMM on CUDA devices ☆98 · Updated 6 months ago
- PTX-Tutorial written purely by AIs (OpenAI Deep Research and Claude 3.7) ☆66 · Updated 4 months ago
- Simple MPI implementation for prototyping or learning ☆272 · Updated 2 weeks ago
- Small-scale distributed training of sequential deep learning models, built on NumPy and MPI ☆137 · Updated last year
- LLM training in simple, raw C/CUDA ☆102 · Updated last year
- PyTorch from scratch in pure C/CUDA and Python ☆40 · Updated 9 months ago
- ☆66 · Updated this week
- Custom PTX Instruction Benchmark ☆126 · Updated 5 months ago
- ☆162 · Updated last year
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆48 · Updated this week
- ☆88 · Updated last year
- An implementation of the transformer architecture as an NVIDIA CUDA kernel ☆189 · Updated last year
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS ☆205 · Updated 3 months ago
- TritonParse: A Compiler Tracer, Visualizer, and mini-Reproducer (WIP) for Triton Kernels ☆138 · Updated this week
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆188 · Updated 2 months ago
- ☆47 · Updated 7 months ago
- Alex Krizhevsky's original code from Google Code ☆195 · Updated 9 years ago
- Multi-Threaded FP32 Matrix Multiplication on x86 CPUs ☆350 · Updated 3 months ago
- ☆227 · Updated last week
- Official Problem Sets / Reference Kernels for the GPU MODE Leaderboard! ☆69 · Updated 2 weeks ago
- A curated list of resources for learning and exploring Triton, OpenAI's programming language for writing efficient GPU code ☆383 · Updated 4 months ago
- Learnings and programs related to CUDA ☆414 · Updated last month
- ☆74 · Updated last year
- NVIDIA Instruction Set Specification Generator ☆285 · Updated last year
- Fast low-bit matmul kernels in Triton ☆338 · Updated last week
- In this repository, I'm going to implement increasingly complex LLM inference optimizations ☆64 · Updated 2 months ago
- Efficient implementation of DeepSeek Ops (Blockwise FP8 GEMM, MoE, and MLA) for AMD Instinct MI300X ☆60 · Updated this week
- Coding CUDA every day! ☆53 · Updated 3 months ago
- Ring-attention experiments ☆146 · Updated 9 months ago
- Cataloging released Triton kernels ☆247 · Updated 6 months ago