cloneofsimo / ptx-tutorial-by-aislop
PTX-Tutorial Written Purely By AIs (OpenAI's Deep Research and Claude 3.7)
☆66 · Updated 9 months ago
Alternatives and similar repositories for ptx-tutorial-by-aislop
Users interested in ptx-tutorial-by-aislop are comparing it to the libraries listed below.
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆195 · Updated 6 months ago
- Learning about CUDA by writing PTX code. ☆150 · Updated last year
- Quantized LLM training in pure CUDA/C++. ☆224 · Updated last week
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆64 · Updated this week
- 👷 Build compute kernels ☆195 · Updated this week
- Learn CUDA with PyTorch ☆138 · Updated this week
- Small scale distributed training of sequential deep learning models, built on NumPy and MPI. ☆153 · Updated 2 years ago
- ☆81 · Updated last week
- ring-attention experiments ☆160 · Updated last year
- MoE training for Me and You and maybe other people ☆239 · Updated last week
- Ship correct and fast LLM kernels to PyTorch ☆126 · Updated last week
- coding CUDA everyday! ☆71 · Updated 2 weeks ago
- FlexAttention based, minimal vllm-style inference engine for fast Gemma 2 inference. ☆327 · Updated last month
- PCCL (Prime Collective Communications Library) implements fault tolerant collective communications over IP ☆141 · Updated 3 months ago
- NanoGPT-speedrunning for the poor T4 enjoyers ☆73 · Updated 8 months ago
- ☆178 · Updated last year
- High-Performance SGEMM on CUDA devices ☆113 · Updated 11 months ago
- Official Problem Sets / Reference Kernels for the GPU MODE Leaderboard! ☆177 · Updated this week
- Load compute kernels from the Hub ☆352 · Updated last week
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS ☆244 · Updated 7 months ago
- ☆91 · Updated last year
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… ☆121 · Updated 2 months ago
- A curated list of resources for learning and exploring Triton, OpenAI's programming language for writing efficient GPU code. ☆444 · Updated 9 months ago
- SIMD quantization kernels ☆93 · Updated 3 months ago
- Simple MPI implementation for prototyping or learning ☆293 · Updated 4 months ago
- making the official triton tutorials actually comprehensible ☆80 · Updated 4 months ago
- NSA Triton Kernels written with GPT5 and Opus 4.1 ☆69 · Updated 4 months ago
- A bunch of kernels that might make stuff slower 😉 ☆69 · Updated this week
- train with kittens! ☆63 · Updated last year
- Official implementation for Training LLMs with MXFP4 ☆115 · Updated 8 months ago