gpu-mode / resource-stream
GPU programming related news and material links
☆1,368 · Updated last month
Alternatives and similar repositories for resource-stream:
Users interested in resource-stream are comparing it to the repositories listed below.
- Puzzles for learning Triton ☆1,403 · Updated 3 months ago
- Material for gpu-mode lectures ☆3,731 · Updated last week
- What would you do with 1000 H100s... ☆999 · Updated last year
- Tile primitives for speedy kernels ☆2,042 · Updated this week
- An ML Systems Onboarding list ☆694 · Updated 3 weeks ago
- Flash Attention in ~100 lines of CUDA (forward pass only) ☆699 · Updated last month
- Fast CUDA matrix multiplication from scratch (see the naive-kernel sketch after this list) ☆632 · Updated last year
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆514 · Updated this week
- UNet diffusion model in pure CUDA ☆598 · Updated 7 months ago
- Training materials associated with NVIDIA's CUDA Training Series (www.olcf.ornl.gov/cuda-training-series/) ☆699 · Updated 6 months ago
- Mirage: Automatically Generating Fast GPU Kernels without Programming in Triton/CUDA ☆743 · Updated last week
- Building blocks for foundation models. ☆448 · Updated last year
- FlashInfer: Kernel Library for LLM Serving ☆2,078 · Updated this week
- Minimalistic 4D-parallelism distributed training framework for educational purposes ☆724 · Updated this week
- CUDA Learning guide ☆323 · Updated 7 months ago
- PyTorch native quantization and sparsity for training and inference ☆1,842 · Updated this week
- ☆123 · Updated 6 months ago
- Make PyTorch models up to 40% faster! Thunder is a source-to-source compiler for PyTorch. It enables using different hardware executors a… ☆1,280 · Updated this week
- ☆142 · Updated last year
- Pipeline Parallelism for PyTorch ☆749 · Updated 5 months ago
- An implementation of the transformer architecture as an NVIDIA CUDA kernel ☆169 · Updated last year
- ☆416 · Updated 4 months ago
- A PyTorch native library for large model training ☆3,313 · Updated this week
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs… ☆2,182 · Updated this week
- Step-by-step optimization of CUDA SGEMM ☆284 · Updated 2 years ago
- An open-source efficient deep learning framework/compiler, written in Python. ☆681 · Updated last week
- Awesome resources for GPUs ☆546 · Updated last year
- Slides, notes, and materials for the workshop ☆316 · Updated 8 months ago
- Learn CUDA Programming, published by Packt ☆1,100 · Updated last year
- ☆359 · Updated 7 months ago
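Several entries above (the matmul-from-scratch and step-by-step SGEMM repos in particular) start from a naive CUDA GEMM and then optimize it with tiling, shared memory, and vectorized loads. As a point of reference, here is a minimal sketch of that naive starting point; the kernel name, sizes, and launch configuration are illustrative and not taken from any of the listed repositories.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Naive SGEMM: C = A * B with row-major matrices of shape M x K, K x N, M x N.
// One thread computes one element of C; this is the unoptimized baseline that
// the step-by-step SGEMM guides improve upon.
__global__ void sgemm_naive(int M, int N, int K,
                            const float *A, const float *B, float *C) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < M && col < N) {
        float acc = 0.0f;
        for (int k = 0; k < K; ++k)
            acc += A[row * K + k] * B[k * N + col];
        C[row * N + col] = acc;
    }
}

int main() {
    const int M = 256, N = 256, K = 256;
    size_t bytesA = (size_t)M * K * sizeof(float);
    size_t bytesB = (size_t)K * N * sizeof(float);
    size_t bytesC = (size_t)M * N * sizeof(float);

    // Host buffers filled with constants so the result is easy to check.
    float *hA = (float *)malloc(bytesA), *hB = (float *)malloc(bytesB), *hC = (float *)malloc(bytesC);
    for (int i = 0; i < M * K; ++i) hA[i] = 1.0f;
    for (int i = 0; i < K * N; ++i) hB[i] = 2.0f;

    float *dA, *dB, *dC;
    cudaMalloc(&dA, bytesA);
    cudaMalloc(&dB, bytesB);
    cudaMalloc(&dC, bytesC);
    cudaMemcpy(dA, hA, bytesA, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, bytesB, cudaMemcpyHostToDevice);

    // One 16x16 thread block per 16x16 tile of C.
    dim3 block(16, 16);
    dim3 grid((N + block.x - 1) / block.x, (M + block.y - 1) / block.y);
    sgemm_naive<<<grid, block>>>(M, N, K, dA, dB, dC);
    cudaDeviceSynchronize();

    cudaMemcpy(hC, dC, bytesC, cudaMemcpyDeviceToHost);
    printf("C[0] = %f (expected %f)\n", hC[0], 2.0f * K);  // 1 * 2 summed K times

    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    free(hA); free(hB); free(hC);
    return 0;
}
```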