NVIDIA / cutlass
CUDA Templates for Linear Algebra Subroutines
☆7,688 · Updated this week
Alternatives and similar repositories for cutlass
Users interested in cutlass are comparing it to the libraries listed below.
- Optimized primitives for collective multi-GPU communication ☆3,789 · Updated 2 weeks ago
- CUDA Core Compute Libraries ☆1,689 · Updated this week
- CUDA Library Samples ☆1,977 · Updated last week
- A machine learning compiler for GPUs, CPUs, and ML accelerators ☆3,243 · Updated this week
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper, Ada and Bla… ☆2,476 · Updated this week
- [ARCHIVED] Cooperative primitives for CUDA C++. See https://github.com/NVIDIA/cccl ☆1,752 · Updated last year
- Transformer-related optimization, including BERT, GPT ☆6,200 · Updated last year
- FlashInfer: Kernel Library for LLM Serving ☆3,170 · Updated this week
- Samples for CUDA developers demonstrating features in the CUDA Toolkit ☆7,598 · Updated 3 weeks ago
- How to optimize some algorithms in CUDA. ☆2,262 · Updated this week
- CUDA Python: Performance meets Productivity ☆2,767 · Updated this week
- Tile primitives for speedy kernels ☆2,457 · Updated this week
- Development repository for the Triton language and compiler ☆15,844 · Updated this week
- Fast and memory-efficient exact attention ☆17,846 · Updated this week
- A Datacenter Scale Distributed Inference Serving Framework ☆4,254 · Updated this week
- Reference implementations of MLPerf™ inference benchmarks ☆1,397 · Updated this week
- Material for gpu-mode lectures ☆4,589 · Updated 4 months ago
- A series of GPU optimization topics, introducing in detail how to optimize CUDA kernels. I will introduce several… ☆1,066 · Updated last year
- TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and support state-of-the-art optimizati… ☆10,734 · Updated this week
- Ongoing research training transformer models at scale ☆12,564 · Updated this week
- Sample code for my CUDA programming book ☆1,732 · Updated 4 months ago
- PyTorch extensions for high-performance and large-scale training ☆3,330 · Updated last month
- The Triton Inference Server provides an optimized cloud and edge inferencing solution. ☆9,347 · Updated this week
- Open deep learning compiler stack for CPUs, GPUs, and specialized accelerators ☆12,370 · Updated this week
- FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/ ☆1,376 · Updated this week
- 📚LeetCUDA: 200+ CUDA/Tensor Cores Kernels, HGEMM, FA-2 MMA. ☆4,789 · Updated this week
- Examples demonstrating available options to program multiple GPUs in a single node or a cluster ☆730 · Updated 3 months ago
- The Torch-MLIR project aims to provide first-class support from the PyTorch ecosystem to the MLIR ecosystem. ☆1,562 · Updated this week
- GPU programming related news and material links ☆1,571 · Updated 5 months ago
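Most of the libraries above ultimately accelerate dense linear-algebra kernels such as GEMM (general matrix-matrix multiplication), the operation CUTLASS itself provides templates for. As a point of reference, here is a minimal sketch of the computation C = alpha·A·B + beta·C in plain C++ — purely illustrative, not taken from any of these libraries; the function name and layout choice are assumptions:

```cpp
#include <cstddef>
#include <vector>

// Naive row-major reference GEMM: C = alpha * A * B + beta * C.
// A is M x K, B is K x N, C is M x N.
// Libraries like CUTLASS, cuBLAS, and FBGEMM compute the same result,
// but with tiling, vectorized loads, and (on NVIDIA GPUs) Tensor Cores.
void gemm_ref(std::size_t M, std::size_t N, std::size_t K,
              float alpha, const std::vector<float>& A,
              const std::vector<float>& B,
              float beta, std::vector<float>& C) {
    for (std::size_t i = 0; i < M; ++i) {
        for (std::size_t j = 0; j < N; ++j) {
            float acc = 0.0f;
            for (std::size_t k = 0; k < K; ++k)
                acc += A[i * K + k] * B[k * N + j];  // dot(row i of A, col j of B)
            C[i * N + j] = alpha * acc + beta * C[i * N + j];
        }
    }
}
```

This triple loop is the baseline the optimized kernels in the repositories above are measured against; the gap between it and a tuned GPU implementation is often several orders of magnitude.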