cloneofsimo / ptx-tutorial-by-aislop
PTX Tutorial Written Purely by AIs (OpenAI Deep Research and Claude 3.7)
☆66 · Updated last month
Alternatives and similar repositories for ptx-tutorial-by-aislop
Users interested in ptx-tutorial-by-aislop are comparing it to the libraries listed below.
- Learning about CUDA by writing PTX code. ☆129 · Updated last year
- NanoGPT speedrunning for the poor T4 enjoyers. ☆65 · Updated 3 weeks ago
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand. ☆180 · Updated this week
- Load compute kernels from the Hub. ☆116 · Updated last week
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS. ☆171 · Updated last week
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆44 · Updated last week
- Flash-Muon: an efficient implementation of the Muon optimizer. ☆103 · Updated last week
- Making the official Triton tutorials actually comprehensible. ☆28 · Updated last month
- Fast low-bit matmul kernels in Triton. ☆299 · Updated this week
- High-performance SGEMM on CUDA devices. ☆91 · Updated 3 months ago
- ☆79 · Updated 10 months ago
- ☆155 · Updated last year
- Collection of autoregressive model implementations. ☆85 · Updated 3 weeks ago
- Ring-attention experiments. ☆140 · Updated 6 months ago
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere). ☆98 · Updated 2 months ago
- Tree Attention: topology-aware decoding for long-context attention on GPU clusters. ☆126 · Updated 5 months ago
- A simple Llama 3 implementation in pure JAX. ☆63 · Updated 2 months ago
- Collection of kernels written in the Triton language. ☆122 · Updated last month
- Code for data-aware compression of DeepSeek models. ☆24 · Updated last month
- Make Triton easier. ☆47 · Updated 11 months ago
- PyTorch from scratch in pure C/CUDA and Python. ☆40 · Updated 7 months ago
- A really tiny autograd engine. ☆92 · Updated last year
- prime-rl: a codebase for decentralized RL training at scale. ☆211 · Updated this week
- Train with kittens! ☆57 · Updated 6 months ago
- ☆186 · Updated 3 months ago
- ☆52 · Updated this week
- KV cache compression for high-throughput LLM inference. ☆126 · Updated 3 months ago
- RWKV-7: Surpassing GPT. ☆84 · Updated 5 months ago
- ☆204 · Updated 2 weeks ago
- A bunch of kernels that might make stuff slower 😉. ☆40 · Updated this week