gpu-mode / resource-stream
GPU programming related news and material links
☆1,208 · Updated last month
Related projects
Alternatives and complementary repositories for resource-stream
- Puzzles for learning Triton ☆1,089 · Updated last month
- Tile primitives for speedy kernels ☆1,643 · Updated this week
- Material for gpu-mode lectures ☆2,967 · Updated this week
- An ML Systems Onboarding list ☆540 · Updated 3 months ago
- Flash Attention in ~100 lines of CUDA (forward pass only) ☆615 · Updated 7 months ago
- UNet diffusion model in pure CUDA ☆567 · Updated 4 months ago
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆479 · Updated 2 weeks ago
- What would you do with 1000 H100s... ☆895 · Updated 10 months ago
- Slides, notes, and materials for the workshop ☆305 · Updated 5 months ago
- Fast CUDA matrix multiplication from scratch ☆471 · Updated 10 months ago
- Mirage: Automatically Generating Fast GPU Kernels without Programming in Triton/CUDA ☆601 · Updated last week
- Training materials associated with NVIDIA's CUDA Training Series (www.olcf.ornl.gov/cuda-training-series/) ☆604 · Updated 2 months ago
- Building blocks for foundation models. ☆388 · Updated 10 months ago
- FlashInfer: Kernel Library for LLM Serving ☆1,399 · Updated this week
- Make PyTorch models up to 40% faster! Thunder is a source-to-source compiler for PyTorch. It enables using different hardware executors a… ☆1,190 · Updated this week
- Annotated version of the Mamba paper ☆455 · Updated 8 months ago
- Pipeline Parallelism for PyTorch ☆725 · Updated 2 months ago
- Efficient implementations of state-of-the-art linear attention models in PyTorch and Triton ☆1,325 · Updated this week
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs… ☆1,955 · Updated this week
- The Tensor (or Array) ☆408 · Updated 3 months ago
- An open-source efficient deep learning framework/compiler, written in Python. ☆649 · Updated this week
- PyTorch native quantization and sparsity for training and inference ☆1,549 · Updated this week
- Helpful tools and examples for working with flex-attention ☆462 · Updated 2 weeks ago
- Alex Krizhevsky's original code from Google Code ☆188 · Updated 8 years ago
- Puzzles for exploring transformers ☆323 · Updated last year
- A native PyTorch library for large model training ☆2,586 · Updated last week
- An implementation of the transformer architecture as an NVIDIA CUDA kernel ☆157 · Updated last year
- Deep learning for dummies. All the practical details and useful utilities that go into working with real models. ☆711 · Updated last month