siboehm / ShallowSpeed
Small-scale distributed training of sequential deep learning models, built on NumPy and MPI.
☆155 · Updated 2 years ago
Alternatives and similar repositories for ShallowSpeed
Users interested in ShallowSpeed are comparing it to the libraries listed below.
- ☆286 · Updated this week
- ☆178 · Updated 2 years ago
- Cataloging released Triton kernels. ☆292 · Updated 5 months ago
- Fast low-bit matmul kernels in Triton ☆427 · Updated last week
- Ring-attention experiments ☆165 · Updated last year
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS ☆251 · Updated 9 months ago
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆198 · Updated 8 months ago
- Learn CUDA with PyTorch ☆193 · Updated last week
- Applied AI experiments and examples for PyTorch ☆315 · Updated 5 months ago
- Official Problem Sets / Reference Kernels for the GPU MODE Leaderboard! ☆201 · Updated this week
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆68 · Updated last week
- An implementation of the transformer architecture as an Nvidia CUDA kernel ☆202 · Updated 2 years ago
- Extensible collectives library in Triton ☆95 · Updated 10 months ago
- Custom kernels in the Triton language for accelerating LLMs ☆27 · Updated last year
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆595 · Updated 5 months ago
- Quantized LLM training in pure CUDA/C++. ☆235 · Updated 2 weeks ago
- Learning about CUDA by writing PTX code. ☆152 · Updated last year
- PTX tutorial written purely by AIs (OpenAI Deep Research and Claude 3.7) ☆66 · Updated 10 months ago
- Collection of kernels written in the Triton language ☆178 · Updated last week
- seqax = sequence modeling + JAX ☆170 · Updated 6 months ago
- Ship correct and fast LLM kernels to PyTorch ☆140 · Updated 3 weeks ago
- MoE training for Me and You and maybe other people ☆335 · Updated last month
- ☆79 · Updated 2 years ago
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆219 · Updated last week
- A Quirky Assortment of CuTe Kernels ☆781 · Updated this week
- A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate. ☆739 · Updated this week
- High-performance FP32 GEMM on CUDA devices ☆117 · Updated last year
- This repository contains the experimental PyTorch native float8 training UX ☆227 · Updated last year
- A bunch of kernels that might make stuff slower 😉 ☆75 · Updated this week
- Fastest kernels written from scratch ☆532 · Updated 4 months ago