siboehm / ShallowSpeed
Small-scale distributed training of sequential deep learning models, built on NumPy and MPI.
☆145 · Updated 2 years ago
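The description above summarizes the project's approach: plain NumPy for the model math and MPI for communication between workers. As a rough illustration of the data-parallel half of that idea (a hypothetical sketch, not code from ShallowSpeed; the toy model, learning rate, and mpi4py usage are assumptions of this example), each rank computes gradients on its own data shard and the ranks average them with an Allreduce before every optimizer step:

```python
# Hypothetical sketch of NumPy + MPI data-parallel training (not ShallowSpeed code).
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, world_size = comm.Get_rank(), comm.Get_size()

# Toy model: a single linear layer y = x @ W, trained with MSE loss.
# Using the same seed on every rank keeps the initial weights identical.
W = np.random.default_rng(seed=0).standard_normal((4, 1))

for step in range(10):
    # Each rank draws its own shard of data (data parallelism).
    shard_rng = np.random.default_rng(rank * 1000 + step)
    x = shard_rng.standard_normal((8, 4))
    y = x @ np.array([[1.0], [2.0], [3.0], [4.0]])

    # Forward + backward on the local shard.
    pred = x @ W
    grad = 2.0 * x.T @ (pred - y) / len(x)   # dLoss/dW for MSE

    # Average gradients across ranks so every worker takes the same step.
    avg_grad = np.empty_like(grad)
    comm.Allreduce(grad, avg_grad, op=MPI.SUM)
    avg_grad /= world_size

    W -= 0.01 * avg_grad                      # plain SGD update

if rank == 0:
    print("final weights:\n", W)
```

Because every rank starts from the same initialization and applies the same averaged gradient, the replicas stay in sync without broadcasting parameters after the first step; a script like this would be launched with something along the lines of `mpirun -n 4 python train_sketch.py`.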
Alternatives and similar repositories for ShallowSpeed
Users interested in ShallowSpeed are comparing it to the libraries listed below.
- Cataloging released Triton kernels. ☆263 · Updated last month
- ☆240 · Updated this week
- ☆174 · Updated last year
- Learn CUDA with PyTorch ☆92 · Updated 3 weeks ago
- Fast low-bit matmul kernels in Triton ☆381 · Updated 3 weeks ago
- Applied AI experiments and examples for PyTorch ☆299 · Updated 2 months ago
- ring-attention experiments ☆154 · Updated last year
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS ☆233 · Updated 5 months ago
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆58 · Updated last week
- An implementation of the transformer architecture as an Nvidia CUDA kernel ☆190 · Updated 2 years ago
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆193 · Updated 4 months ago
- Fastest kernels written from scratch ☆374 · Updated last month
- This repository contains the experimental PyTorch native float8 training UX ☆223 · Updated last year
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆578 · Updated 2 months ago
- extensible collectives library in triton ☆89 · Updated 6 months ago
- Custom kernels in Triton language for accelerating LLMs ☆26 · Updated last year
- Learning about CUDA by writing PTX code. ☆144 · Updated last year
- PTX-Tutorial written purely by AIs (OpenAI's Deep Research and Claude 3.7) ☆66 · Updated 6 months ago
- ☆332 · Updated last month
- Collection of kernels written in Triton language ☆157 · Updated 6 months ago
- Official Problem Sets / Reference Kernels for the GPU MODE Leaderboard! ☆99 · Updated last week
- A bunch of kernels that might make stuff slower 😉 ☆62 · Updated this week
- Solve puzzles. Learn CUDA. ☆64 · Updated last year
- Quantized LLM training in pure CUDA/C++. ☆206 · Updated this week
- How to ensure correctness and ship LLM-generated kernels in PyTorch ☆66 · Updated last week
- PyTorch Single Controller ☆438 · Updated last week
- KernelBench: Can LLMs Write GPU Kernels? - Benchmark with Torch -> CUDA problems ☆612 · Updated last week
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo) ☆420 · Updated this week
- ☆121 · Updated 7 months ago
- ☆28 · Updated 9 months ago