linjames0 / Transformer-CUDA
An implementation of the transformer architecture in NVIDIA CUDA kernels.
☆180 · Updated last year
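As a rough illustration of what an attention kernel in such a project might look like, here is a minimal, unoptimized sketch (not taken from this repository; all names are hypothetical) that computes the scaled dot-product attention scores S = QKᵀ / √d, one thread per score:

```cuda
// Hypothetical sketch, not code from linjames0/Transformer-CUDA.
// One thread per (query, key) pair; Q, K are row-major [seq_len, d],
// S is [seq_len, seq_len]. rsqrtf is CUDA's device-side 1/sqrt intrinsic.
__global__ void attention_scores(const float* Q, const float* K, float* S,
                                 int seq_len, int d) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;  // query index
    int col = blockIdx.x * blockDim.x + threadIdx.x;  // key index
    if (row < seq_len && col < seq_len) {
        float acc = 0.0f;
        for (int k = 0; k < d; ++k)
            acc += Q[row * d + k] * K[col * d + k];
        S[row * seq_len + col] = acc * rsqrtf((float)d);
    }
}
// Launched with a 2-D grid, e.g.:
//   dim3 block(16, 16);
//   dim3 grid((seq_len + 15) / 16, (seq_len + 15) / 16);
//   attention_scores<<<grid, block>>>(Q, K, S, seq_len, d);
```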
Alternatives and similar repositories for Transformer-CUDA:
Users interested in Transformer-CUDA are comparing it to the repositories listed below.
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆536 · Updated last week
- Small-scale distributed training of sequential deep learning models, built on NumPy and MPI. ☆130 · Updated last year
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS. ☆169 · Updated last month
- Cataloging released Triton kernels. ☆220 · Updated 3 months ago
- Fastest kernels written from scratch. ☆256 · Updated last month
- Solve puzzles. Learn CUDA. ☆64 · Updated last year
- Collection of kernels written in the Triton language. ☆121 · Updated last month
- Applied AI experiments and examples for PyTorch. ☆264 · Updated last week
- Learning about CUDA by writing PTX code (see the inline-PTX sketch after this list). ☆128 · Updated last year
- Fast low-bit matmul kernels in Triton. ☆295 · Updated this week
- CUDA matrix multiplication optimization. ☆184 · Updated 9 months ago
- This repository contains the experimental PyTorch-native float8 training UX. ☆224 · Updated 9 months ago
- A repository that unravels the language of GPUs, making their kernel conversations easy to understand. ☆180 · Updated last week
- High-performance SGEMM on CUDA devices. ☆90 · Updated 3 months ago
- Tritonbench: a collection of PyTorch custom operators with example inputs to measure their performance. ☆122 · Updated this week
- KernelBench: Can LLMs write GPU kernels? A benchmark of Torch -> CUDA problems. ☆288 · Updated last week
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆44 · Updated this week
- An experiment in using Tangent to autodiff Triton. ☆78 · Updated last year
- Step-by-step optimization of CUDA SGEMM (see the tiled-SGEMM sketch after this list). ☆314 · Updated 3 years ago
- Fast CUDA matrix multiplication from scratch. ☆707 · Updated last year
- ring-attention experiments. ☆132 · Updated 6 months ago
- A minimal cache manager for PagedAttention, built on top of llama3. ☆85 · Updated 8 months ago
- The simplest yet fast implementation of matrix multiplication in CUDA. ☆34 · Updated 9 months ago
- A curated list of resources for learning and exploring Triton, OpenAI's programming language for writing efficient GPU code. ☆342 · Updated last month
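For the SGEMM-optimization entries above, a typical first step beyond a naive kernel is shared-memory tiling. The sketch below is illustrative only (not code from any listed repository) and assumes square row-major matrices with N a multiple of the tile size:

```cuda
// Minimal shared-memory-tiled SGEMM sketch: C = A * B, all row-major,
// N assumed to be a multiple of TILE for brevity.
#define TILE 16

__global__ void sgemm_tiled(const float* A, const float* B, float* C, int N) {
    __shared__ float As[TILE][TILE];
    __shared__ float Bs[TILE][TILE];

    int row = blockIdx.y * TILE + threadIdx.y;
    int col = blockIdx.x * TILE + threadIdx.x;
    float acc = 0.0f;

    for (int t = 0; t < N / TILE; ++t) {
        // Each thread stages one element of the A and B tiles into shared
        // memory, so the inner product below reads fast on-chip storage
        // instead of global memory.
        As[threadIdx.y][threadIdx.x] = A[row * N + t * TILE + threadIdx.x];
        Bs[threadIdx.y][threadIdx.x] = B[(t * TILE + threadIdx.y) * N + col];
        __syncthreads();

        for (int k = 0; k < TILE; ++k)
            acc += As[threadIdx.y][k] * Bs[k][threadIdx.x];
        __syncthreads();  // ensure tiles are consumed before reloading
    }
    C[row * N + col] = acc;
}
```

The step-by-step repositories typically continue from here with register blocking, vectorized loads, and double buffering.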
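And for the entry on learning CUDA by writing PTX: inline PTX can be embedded in a CUDA `__device__` function via the `asm` statement. A tiny illustrative example (again, not from the listed repository) issuing a single fused multiply-add:

```cuda
// Inline PTX sketch: compute d = a * b + c with the fma.rn.f32 instruction
// (round-to-nearest-even). "=f"/"f" are the float register constraints.
__device__ float fma_ptx(float a, float b, float c) {
    float d;
    asm("fma.rn.f32 %0, %1, %2, %3;" : "=f"(d) : "f"(a), "f"(b), "f"(c));
    return d;
}
```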