siboehm / ShallowSpeed
Small-scale distributed training of sequential deep learning models, built on NumPy and MPI.
☆130 · Updated last year
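The core idea behind such a library is easy to sketch. Below is a minimal, hypothetical example (not ShallowSpeed's actual API) of synchronous data-parallel SGD with NumPy and mpi4py: every rank computes gradients on its own data shard, and an MPI all-reduce averages them before each update.

```python
# Minimal sketch of synchronous data-parallel SGD with NumPy + mpi4py.
# Illustrates the general technique only, not ShallowSpeed's actual API.
# Run with: mpirun -n 4 python sketch.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, world = comm.Get_rank(), comm.Get_size()

rng = np.random.default_rng(seed=rank)   # each rank sees a different data shard
W = np.zeros((4, 4))
if rank == 0:
    W = np.random.default_rng(0).standard_normal((4, 4))
comm.Bcast(W, root=0)                    # all ranks start from identical weights

lr = 0.1
for step in range(10):
    x = rng.standard_normal((8, 4))      # local minibatch
    grad = 2.0 * x.T @ (x @ W) / len(x)  # gradient of mean ||xW||^2 w.r.t. W
    # The all-reduce is the heart of data parallelism: average gradients
    # across ranks so every replica applies the same update.
    avg = np.empty_like(grad)
    comm.Allreduce(grad, avg, op=MPI.SUM)
    W -= lr * (avg / world)
```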
Alternatives and similar repositories for ShallowSpeed:
Users interested in ShallowSpeed are comparing it to the libraries listed below.
- ☆202 · Updated last week
- Cataloging released Triton kernels. ☆220 · Updated 3 months ago
- An extensible collectives library in Triton ☆85 · Updated last month
- Ring-attention experiments ☆132 · Updated 6 months ago
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS ☆169 · Updated last month
- Fast low-bit matmul kernels in Triton ☆295 · Updated this week
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆44 · Updated this week
- Applied AI experiments and examples for PyTorch ☆262 · Updated last week
- An implementation of the transformer architecture as an NVIDIA CUDA kernel ☆180 · Updated last year
- Fastest kernels written from scratch ☆252 · Updated last month
- seqax = sequence modeling + JAX ☆155 · Updated 3 weeks ago
- ☆155 · Updated last year
- A collection of kernels written in the Triton language ☆120 · Updated last month
- This repository contains the experimental PyTorch native float8 training UX ☆224 · Updated 9 months ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆120 · Updated this week
- KernelBench: Can LLMs Write GPU Kernels? A benchmark of Torch -> CUDA problems ☆288 · Updated last week
- A bunch of kernels that might make stuff slower 😉 ☆40 · Updated this week
- A PTX tutorial written purely by AIs (OpenAI's Deep Research and Claude 3.7) ☆65 · Updated last month
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆180 · Updated last week
- ☆102 · Updated last month
- An experiment in using Tangent to autodiff Triton ☆78 · Updated last year
- Custom kernels in the Triton language for accelerating LLMs ☆18 · Updated last year
- Learning about CUDA by writing PTX code. ☆128 · Updated last year
- Solve puzzles. Learn CUDA. ☆64 · Updated last year
- Home for "How To Scale Your Model", a short blog-style textbook about scaling LLMs on TPUs ☆258 · Updated last week
- ☆88 · Updated last year
- ☆31 · Updated 3 months ago
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆536 · Updated last week
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆244 · Updated this week
- The simplest but still fast implementation of matrix multiplication in CUDA. ☆34 · Updated 9 months ago