cloneofsimo / ptx-tutorial-by-aislop
PTX-Tutorial Written Purely By AIs (OpenAI Deep Research and Claude 3.7)
☆67, updated 2 months ago
Alternatives and similar repositories for ptx-tutorial-by-aislop
Users interested in ptx-tutorial-by-aislop are comparing it to the repositories listed below.
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand (☆184, updated last week)
- Write a fast kernel and run it on Discord. See how you compare against the best! (☆44, updated this week)
- Learning about CUDA by writing PTX code. (☆131, updated last year)
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS (☆181, updated 3 weeks ago)
- NanoGPT speedrunning for the poor T4 enjoyers (☆66, updated last month)
- PCCL (Prime Collective Communications Library) implements fault-tolerant collective communications over IP (☆88, updated last week)
- High-performance SGEMM on CUDA devices (☆92, updated 4 months ago)
- Load compute kernels from the Hub (☆139, updated this week)
- Making the official Triton tutorials actually comprehensible (☆34, updated 2 months ago)
- Fast low-bit matmul kernels in Triton (☆303, updated last week)
- PyTorch from scratch in pure C/CUDA and Python (☆40, updated 7 months ago)
- Flash-Muon: An Efficient Implementation of Muon Optimizer (☆115, updated this week)
- Small-scale distributed training of sequential deep learning models, built on NumPy and MPI. (☆133, updated last year)
- Ring-attention experiments (☆143, updated 7 months ago)
- A bunch of kernels that might make stuff slower 😉 (☆46, updated this week)
- ☆34, updated 4 months ago
- ☆157, updated last year
- ☆88, updated last year
- A really tiny autograd engine (☆94, updated last week)
- Extensible collectives library in Triton (☆87, updated 2 months ago)
- ☆78, updated 10 months ago
- Collection of autoregressive model implementations (☆85, updated last month)
- KernelBench: Can LLMs Write GPU Kernels? A benchmark of Torch-to-CUDA problems (☆351, updated 3 weeks ago)
- ☆46, updated 2 months ago
- ☆210, updated last week
- TritonBench is a collection of PyTorch custom operators with example inputs to measure their performance. (☆127, updated this week)
- Kernels, of the mega variety (☆329, updated this week)
- Collection of kernels written in the Triton language (☆125, updated last month)
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) (☆100, updated 2 months ago)
- Learn CUDA with PyTorch (☆21, updated this week)