linjames0 / Transformer-CUDA
An implementation of the transformer architecture in NVIDIA CUDA kernels
☆185 · Updated last year
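As a rough illustration of what implementing transformer components as CUDA kernels involves, here is a minimal, hypothetical sketch of a naive scaled dot-product attention score kernel (one thread per output element of S = QKᵀ / √d_head, single head). It is not taken from the Transformer-CUDA source; the kernel name, `seq_len`, and `d_head` are illustrative assumptions.

```cuda
// Hypothetical sketch, not from linjames0/Transformer-CUDA:
// compute attention scores S = Q * K^T / sqrt(d_head) for one head.
#include <cuda_runtime.h>
#include <math.h>
#include <stdio.h>

__global__ void attention_scores(const float* Q, const float* K, float* S,
                                 int seq_len, int d_head) {
    // One thread per (query row, key column) entry of the score matrix.
    int row = blockIdx.y * blockDim.y + threadIdx.y;  // query index
    int col = blockIdx.x * blockDim.x + threadIdx.x;  // key index
    if (row >= seq_len || col >= seq_len) return;

    float acc = 0.0f;
    for (int k = 0; k < d_head; ++k)
        acc += Q[row * d_head + k] * K[col * d_head + k];

    S[row * seq_len + col] = acc * rsqrtf((float)d_head);
}

int main() {
    const int seq_len = 128, d_head = 64;
    float *Q, *K, *S;
    cudaMallocManaged(&Q, seq_len * d_head * sizeof(float));
    cudaMallocManaged(&K, seq_len * d_head * sizeof(float));
    cudaMallocManaged(&S, seq_len * seq_len * sizeof(float));
    for (int i = 0; i < seq_len * d_head; ++i) {
        Q[i] = 0.01f * (i % 97);
        K[i] = 0.02f * (i % 89);
    }

    dim3 block(16, 16);
    dim3 grid((seq_len + 15) / 16, (seq_len + 15) / 16);
    attention_scores<<<grid, block>>>(Q, K, S, seq_len, d_head);
    cudaDeviceSynchronize();
    printf("S[0][0] = %f\n", S[0]);
    cudaFree(Q); cudaFree(K); cudaFree(S);
    return 0;
}
```

A full transformer layer would add a softmax over each score row, the weighted sum with V, and the feed-forward projections, typically as separate or fused kernels.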
Alternatives and similar repositories for Transformer-CUDA
Users interested in Transformer-CUDA are comparing it to the libraries listed below.
- ☆159 · Updated last year
- Small-scale distributed training of sequential deep learning models, built on NumPy and MPI. ☆134 · Updated last year
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆556 · Updated this week
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS ☆186 · Updated last month
- Cataloging released Triton kernels. ☆236 · Updated 5 months ago
- Fastest kernels written from scratch ☆281 · Updated 2 months ago
- ☆219 · Updated this week
- ☆88 · Updated last year
- Fast low-bit matmul kernels in Triton ☆322 · Updated this week
- ring-attention experiments ☆144 · Updated 8 months ago
- Collection of kernels written in the Triton language ☆128 · Updated 2 months ago
- Solve puzzles. Learn CUDA. ☆64 · Updated last year
- Applied AI experiments and examples for PyTorch ☆277 · Updated 3 weeks ago
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆184 · Updated 3 weeks ago
- Learning about CUDA by writing PTX code. ☆132 · Updated last year
- ☆172 · Updated 5 months ago
- Alex Krizhevsky's original code from Google Code ☆192 · Updated 9 years ago
- This repository contains the experimental PyTorch native float8 training UX ☆224 · Updated 10 months ago
- PTX tutorial written purely by AIs (OpenAI's Deep Research and Claude 3.7) ☆67 · Updated 2 months ago
- UNet diffusion model in pure CUDA ☆608 · Updated 11 months ago
- CUDA Matrix Multiplication Optimization ☆194 · Updated 11 months ago
- Simple Byte Pair Encoding mechanism for tokenization, written purely in C ☆134 · Updated 7 months ago
- A simple but fast implementation of matrix multiplication in CUDA. ☆35 · Updated 10 months ago
- High-Performance SGEMM on CUDA devices ☆95 · Updated 5 months ago
- Step-by-step optimization of CUDA SGEMM (a minimal shared-memory tiling sketch follows this list) ☆339 · Updated 3 years ago
- a minimal cache manager for PagedAttention, on top of llama3. ☆91 · Updated 9 months ago
- ☆317 · Updated this week
- KernelBench: Can LLMs Write GPU Kernels? - Benchmark with Torch -> CUDA problems ☆415 · Updated 3 weeks ago
- ☆212 · Updated 11 months ago
- Reference Kernels for the Leaderboard ☆59 · Updated this week
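Several entries above (the SGEMM optimization guides and the matrix-multiplication repositories) revolve around the same progression from a naive kernel to a shared-memory tiled one. The following is a minimal sketch of that first tiling step, assuming row-major float matrices; it is illustrative and not taken from any of the listed repositories.

```cuda
// Hypothetical sketch of shared-memory tiling for SGEMM: C = A * B,
// with A (M x K), B (K x N), C (M x N), all row-major.
#include <cuda_runtime.h>
#include <stdio.h>

#define TILE 32

__global__ void sgemm_tiled(const float* A, const float* B, float* C,
                            int M, int N, int K) {
    __shared__ float As[TILE][TILE];
    __shared__ float Bs[TILE][TILE];

    int row = blockIdx.y * TILE + threadIdx.y;
    int col = blockIdx.x * TILE + threadIdx.x;
    float acc = 0.0f;

    for (int t = 0; t < (K + TILE - 1) / TILE; ++t) {
        // Stage one TILE x TILE block of A and B into shared memory.
        int a_col = t * TILE + threadIdx.x;
        int b_row = t * TILE + threadIdx.y;
        As[threadIdx.y][threadIdx.x] = (row < M && a_col < K) ? A[row * K + a_col] : 0.0f;
        Bs[threadIdx.y][threadIdx.x] = (b_row < K && col < N) ? B[b_row * N + col] : 0.0f;
        __syncthreads();

        for (int k = 0; k < TILE; ++k)
            acc += As[threadIdx.y][k] * Bs[k][threadIdx.x];
        __syncthreads();
    }

    if (row < M && col < N)
        C[row * N + col] = acc;
}

int main() {
    const int M = 256, N = 256, K = 256;
    float *A, *B, *C;
    cudaMallocManaged(&A, M * K * sizeof(float));
    cudaMallocManaged(&B, K * N * sizeof(float));
    cudaMallocManaged(&C, M * N * sizeof(float));
    for (int i = 0; i < M * K; ++i) A[i] = 1.0f;
    for (int i = 0; i < K * N; ++i) B[i] = 1.0f;

    dim3 block(TILE, TILE);
    dim3 grid((N + TILE - 1) / TILE, (M + TILE - 1) / TILE);
    sgemm_tiled<<<grid, block>>>(A, B, C, M, N, K);
    cudaDeviceSynchronize();
    printf("C[0][0] = %f (expected %d)\n", C[0], K);
    cudaFree(A); cudaFree(B); cudaFree(C);
    return 0;
}
```

The step-by-step guides typically continue from here with register blocking, vectorized loads, double buffering, and eventually tensor-core paths.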