SzymonOzog / Penny
Hand-Rolled GPU communications library
☆72 · Updated last week
Alternatives and similar repositories for Penny
Users interested in Penny are comparing it to the libraries listed below.
- Quantized LLM training in pure CUDA/C++. ☆220 · Updated this week
- High-Performance SGEMM on CUDA devices ☆112 · Updated 10 months ago
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆61 · Updated this week
- Learning about CUDA by writing PTX code. ☆148 · Updated last year
- ring-attention experiments ☆160 · Updated last year
- Ship correct and fast LLM kernels to PyTorch ☆124 · Updated 2 weeks ago
- 👷 Build compute kernels ☆190 · Updated this week
- Fast low-bit matmul kernels in Triton ☆401 · Updated last week
- Official Problem Sets / Reference Kernels for the GPU MODE Leaderboard! ☆164 · Updated this week
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS ☆244 · Updated 6 months ago
- A bunch of kernels that might make stuff slower 😉 ☆65 · Updated this week
- Tilus is a tile-level kernel programming language with explicit control over shared memory and registers. ☆408 · Updated this week
- PTX-Tutorial Written Purely By AIs (Deep Research by OpenAI and Claude 3.7) ☆66 · Updated 8 months ago
- Small scale distributed training of sequential deep learning models, built on Numpy and MPI. ☆151 · Updated 2 years ago
- Fast and Furious AMD Kernels ☆298 · Updated last week
- An early research stage MoE load balancer based on linear programming. ☆415 · Updated 2 weeks ago
- PCCL (Prime Collective Communications Library) implements fault tolerant collective communications over IP ☆138 · Updated 2 months ago
- TritonParse: A Compiler Tracer, Visualizer, and Reproducer for Triton Kernels ☆175 · Updated last week
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆196 · Updated 6 months ago
- Collection of kernels written in Triton language ☆169 · Updated 7 months ago
- extensible collectives library in triton ☆91 · Updated 8 months ago
- ☆256 · Updated last week
- Learn CUDA with PyTorch ☆117 · Updated last week
- A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate. ☆640 · Updated this week
- ☆219 · Updated 10 months ago
- Parallel framework for training and fine-tuning deep neural networks ☆70 · Updated 3 weeks ago
- train with kittens! ☆63 · Updated last year
- NVIDIA NVSHMEM is a parallel programming interface for NVIDIA GPUs based on OpenSHMEM. NVSHMEM can significantly reduce multi-process com… ☆402 · Updated 2 weeks ago
- An implementation of the transformer architecture as an NVIDIA CUDA kernel ☆195 · Updated 2 years ago
- kernels, of the mega variety ☆614 · Updated 2 months ago