MekkCyber / CutlassAcademy
A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS
☆153 · Updated this week
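For context on the subject matter: CUTLASS is NVIDIA's CUDA C++ template library for high-performance GEMM and related kernels. Below is a minimal sketch of a single-precision GEMM using the CUTLASS 2.x device-level API, in the style of the library's basic-GEMM example; the wrapper name `sgemm` and the column-major float layouts are illustrative assumptions, not code from this repository.

```cuda
// Minimal SGEMM via the CUTLASS 2.x device API: D = alpha * A @ B + beta * C.
// A, B, C are device pointers; lda/ldb/ldc are leading dimensions.
#include <cutlass/gemm/device/gemm.h>

cutlass::Status sgemm(int M, int N, int K,
                      float alpha, float const *A, int lda,
                      float const *B, int ldb,
                      float beta, float *C, int ldc) {
  // Instantiate a GEMM for column-major float operands and output;
  // tile shapes and epilogue use the library defaults.
  using Gemm = cutlass::gemm::device::Gemm<
      float, cutlass::layout::ColumnMajor,   // A
      float, cutlass::layout::ColumnMajor,   // B
      float, cutlass::layout::ColumnMajor>;  // C / D

  Gemm gemm_op;
  Gemm::Arguments args({M, N, K},     // problem size
                       {A, lda},      // source A
                       {B, ldb},      // source B
                       {C, ldc},      // source C
                       {C, ldc},      // destination D (aliases C here)
                       {alpha, beta}); // epilogue scalars
  return gemm_op(args);               // launches the kernel
}
```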
Alternatives and similar repositories for CutlassAcademy:
Users interested in CutlassAcademy are comparing it to the repositories listed below.
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆167 · Updated this week
- Cataloging released Triton kernels. ☆208 · Updated 2 months ago
- Fast low-bit matmul kernels in Triton ☆267 · Updated this week
- Applied AI experiments and examples for PyTorch ☆250 · Updated this week
- ☆191 · Updated this week
- Fastest kernels written from scratch ☆199 · Updated 2 weeks ago
- Fast Matrix Multiplications for Lookup Table-Quantized LLMs ☆234 · Updated last month
- ☆151 · Updated last year
- Small-scale distributed training of sequential deep learning models, built on NumPy and MPI. ☆127 · Updated last year
- This repository contains the experimental PyTorch-native float8 training UX ☆222 · Updated 7 months ago
- KernelBench: Can LLMs Write GPU Kernels? A benchmark with Torch -> CUDA problems ☆237 · Updated last week
- A curated list of resources for learning and exploring Triton, OpenAI's programming language for writing efficient GPU code. ☆306 · Updated 2 weeks ago
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆523 · Updated last month
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆189 · Updated this week
- ☆138 · Updated 2 months ago
- Ring-attention experiments ☆128 · Updated 5 months ago
- Collection of kernels written in the Triton language ☆114 · Updated last month
- ☆191 · Updated 8 months ago
- ☆82 · Updated last week
- Extensible collectives library in Triton ☆84 · Updated 6 months ago
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and an SDPA implementation of Flash… ☆233 · Updated this week
- LLM KV cache compression made easy ☆442 · Updated this week
- Efficient LLM Inference over Long Sequences ☆365 · Updated last month
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆326 · Updated this week
- ☆73 · Updated 4 months ago
- A simple but fast implementation of matrix multiplication in CUDA (see the sketch after this list). ☆34 · Updated 7 months ago
- ☆101 · Updated 6 months ago
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5). ☆240 · Updated 4 months ago
- Learning about CUDA by writing PTX code. ☆124 · Updated last year
- ☆57 · Updated 2 months ago
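Several entries above are hand-written matmul kernels. As a point of reference for the "simple but fast matrix multiplication in CUDA" item, here is a minimal shared-memory-tiled SGEMM sketch; the kernel name, the 32x32 tile size, and the row-major layout are illustrative assumptions, not code from any listed repository.

```cuda
// Tiled SGEMM: C = A @ B for row-major A (MxK), B (KxN), C (MxN).
// Each thread block computes one TILE x TILE tile of C, staging the
// corresponding tiles of A and B through shared memory.
#define TILE 32

__global__ void sgemm_tiled(const float *A, const float *B, float *C,
                            int M, int N, int K) {
  __shared__ float As[TILE][TILE];
  __shared__ float Bs[TILE][TILE];

  int row = blockIdx.y * TILE + threadIdx.y;  // row of C for this thread
  int col = blockIdx.x * TILE + threadIdx.x;  // column of C for this thread
  float acc = 0.0f;

  // March over the K dimension one tile at a time.
  for (int t = 0; t < (K + TILE - 1) / TILE; ++t) {
    int a_col = t * TILE + threadIdx.x;
    int b_row = t * TILE + threadIdx.y;
    // Guarded loads; out-of-range elements contribute zero.
    As[threadIdx.y][threadIdx.x] =
        (row < M && a_col < K) ? A[row * K + a_col] : 0.0f;
    Bs[threadIdx.y][threadIdx.x] =
        (b_row < K && col < N) ? B[b_row * N + col] : 0.0f;
    __syncthreads();  // tiles fully loaded before use

    for (int k = 0; k < TILE; ++k)
      acc += As[threadIdx.y][k] * Bs[k][threadIdx.x];
    __syncthreads();  // done reading before the next load overwrites
  }

  if (row < M && col < N)
    C[row * N + col] = acc;
}
```

A typical launch pairs one thread per output element with the tile shape, e.g. `dim3 block(TILE, TILE); dim3 grid((N + TILE - 1) / TILE, (M + TILE - 1) / TILE); sgemm_tiled<<<grid, block>>>(dA, dB, dC, M, N, K);`. The repositories above layer further optimizations (register blocking, vectorized loads, tensor cores) on top of this basic pattern.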