NVIDIA / cutlass
CUDA Templates for Linear Algebra Subroutines
☆7,450 · Updated 2 weeks ago
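For orientation, this is roughly what the library's device-level API looks like: a single-precision GEMM configured entirely through C++ templates. The sketch below is adapted from the basic GEMM example shipped with the CUTLASS repository; the exact template interface differs between CUTLASS 2.x and 3.x, so treat it as illustrative rather than a fixed API.

```cpp
// Minimal sketch: C = alpha * A * B + beta * C with the CUTLASS 2.x-style
// device-level GEMM. Compile with nvcc and -I<cutlass>/include.
#include <iostream>
#include <cuda_runtime.h>
#include <cutlass/gemm/device/gemm.h>

int main() {
  int M = 128, N = 128, K = 128;
  float alpha = 1.0f, beta = 0.0f;

  // Element types and layouts for A, B, and C are fixed at compile time.
  using ColumnMajor = cutlass::layout::ColumnMajor;
  using Gemm = cutlass::gemm::device::Gemm<float, ColumnMajor,   // A
                                           float, ColumnMajor,   // B
                                           float, ColumnMajor>;  // C

  // Device buffers, zero-initialized for brevity; leading dimensions are the
  // number of rows because the layouts above are column-major.
  float *A, *B, *C;
  cudaMalloc(&A, sizeof(float) * M * K);
  cudaMalloc(&B, sizeof(float) * K * N);
  cudaMalloc(&C, sizeof(float) * M * N);
  cudaMemset(A, 0, sizeof(float) * M * K);
  cudaMemset(B, 0, sizeof(float) * K * N);
  cudaMemset(C, 0, sizeof(float) * M * N);

  // Pack problem size, tensor references, and epilogue scalars, then launch.
  Gemm gemm_op;
  Gemm::Arguments args({M, N, K}, {A, M}, {B, K}, {C, M}, {C, M}, {alpha, beta});
  cutlass::Status status = gemm_op(args);

  std::cout << (status == cutlass::Status::kSuccess ? "GEMM ok" : "GEMM failed")
            << std::endl;

  cudaFree(A); cudaFree(B); cudaFree(C);
  return status == cutlass::Status::kSuccess ? 0 : 1;
}
```

Fixing types, layouts, and (in fuller configurations) tile shapes at compile time is what lets CUTLASS emit a specialized kernel per configuration; that is the main contrast with the compiler-driven alternatives listed below.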
Alternatives and similar repositories for cutlass
Users who are interested in cutlass are comparing it to the libraries listed below.
- Optimized primitives for collective multi-GPU communication ☆3,710 · Updated 2 weeks ago
- Development repository for the Triton language and compiler ☆15,568 · Updated this week
- CUDA Core Compute Libraries ☆1,636 · Updated this week
- CUDA Library Samples ☆1,924 · Updated this week
- A machine learning compiler for GPUs, CPUs, and ML accelerators ☆3,169 · Updated this week
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper, Ada and Bla… ☆2,412 · Updated this week
- Transformer-related optimization, including BERT, GPT ☆6,152 · Updated last year
- FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/ ☆1,319 · Updated this week
- Tile primitives for speedy kernels ☆2,339 · Updated this week
- [ARCHIVED] Cooperative primitives for CUDA C++. See https://github.com/NVIDIA/cccl ☆1,746 · Updated last year
- FlashInfer: Kernel Library for LLM Serving ☆2,966 · Updated this week
- How to optimize some algorithms in CUDA. ☆2,162 · Updated this week
- A retargetable MLIR-based machine learning compiler and runtime toolkit. ☆3,132 · Updated this week
- Samples for CUDA developers which demonstrate features in the CUDA Toolkit ☆7,446 · Updated last week
- CUDA Python: Performance meets Productivity ☆2,637 · Updated this week
- The Torch-MLIR project aims to provide first-class support from the PyTorch ecosystem to the MLIR ecosystem. ☆1,529 · Updated this week
- A series of GPU optimization topics introducing in detail how to optimize CUDA kernels. I will introduce several… ☆1,033 · Updated last year
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source compone… ☆11,575 · Updated last week
- Material for gpu-mode lectures ☆4,444 · Updated 3 months ago
- AITemplate is a Python framework which renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (N… ☆4,633 · Updated last month
- 📚LeetCUDA: Modern CUDA Learn Notes with PyTorch for Beginners🐑, 200+ CUDA/Tensor Cores Kernels, HGEMM, FA-2 MMA etc.🔥 ☆4,205 · Updated this week
- Low-precision matrix multiplication ☆1,803 · Updated last year
- PyTorch extensions for high performance and large scale training. ☆3,317 · Updated 3 weeks ago
- Mirage: Automatically Generating Fast GPU Kernels without Programming in Triton/CUDA ☆828 · Updated this week
- An efficient C++17 GPU numerical computing library with Python-like syntax ☆1,321 · Updated last week
- Open deep learning compiler stack for CPU, GPU, and specialized accelerators ☆12,275 · Updated this week
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT ☆2,749 · Updated this week
- Fast and memory-efficient exact attention ☆17,346 · Updated last week
- Domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels ☆1,167 · Updated this week
- Collective communications library with various primitives for multi-machine training. ☆1,302 · Updated this week