SzymonOzog / Penny
Hand-Rolled GPU communications library
☆81 · Updated 2 months ago
Alternatives and similar repositories for Penny
Users interested in Penny are comparing it to the libraries listed below.
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆68 · Updated this week
- Quantized LLM training in pure CUDA/C++. ☆235 · Updated 2 weeks ago
- CUDA-L2: Surpassing cuBLAS Performance for Matrix Multiplication through Reinforcement Learning ☆383 · Updated 3 weeks ago
- Ship correct and fast LLM kernels to PyTorch ☆139 · Updated 2 weeks ago
- Official Problem Sets / Reference Kernels for the GPU MODE Leaderboard! ☆200 · Updated this week
- Tilus is a tile-level kernel programming language with explicit control over shared memory and registers. ☆440 · Updated last month
- extensible collectives library in triton ☆93 · Updated 10 months ago
- Parallel framework for training and fine-tuning deep neural networks ☆70 · Updated 2 months ago
- High-Performance FP32 GEMM on CUDA devices ☆117 · Updated last year
- Fast and Furious AMD Kernels ☆346 · Updated last week
- ring-attention experiments ☆165 · Updated last year
- Fast low-bit matmul kernels in Triton ☆424 · Updated this week
- torchcomms: a modern PyTorch communications API ☆323 · Updated last week
- TritonParse: A Compiler Tracer, Visualizer, and Reproducer for Triton Kernels ☆189 · Updated this week
- PTX-Tutorial Written Purely By AIs (OpenAI's Deep Research and Claude 3.7) ☆66 · Updated 10 months ago
- AMD RAD's multi-GPU Triton-based framework for seamless multi-GPU programming ☆164 · Updated this week
- Our first fully AI generated deep learning system ☆429 · Updated last week
- Learning about CUDA by writing PTX code. ☆151 · Updated last year
- PCCL (Prime Collective Communications Library) implements fault tolerant collective communications over IP ☆141 · Updated 4 months ago
- A bunch of kernels that might make stuff slower 😉