NVIDIA / TensorRT-Incubator
Experimental projects related to TensorRT
☆62 · Updated this week
Related projects:
- Shared middle layer for Triton compilation ☆160 · Updated this week
- Assembler for NVIDIA Volta and Turing GPUs ☆195 · Updated 2 years ago
- Collection of benchmarks to measure basic GPU capabilities ☆241 · Updated 2 months ago
- PyTorch emulation library for Microscaling (MX)-compatible data formats ☆143 · Updated last month
- An easy-to-understand TensorOp matmul tutorial ☆265 · Updated this week
- OpenAI Triton backend for Intel® GPUs ☆126 · Updated this week
- Several optimization methods for half-precision general matrix multiplication (HGEMM) using tensor cores with the WMMA API and MMA PTX instruct… ☆266 · Updated last week
- A home for the final text of all TVM RFCs ☆99 · Updated 3 months ago
- MSCCL++: a GPU-driven communication stack for scalable AI applications ☆233 · Updated this week
- A fast communication-overlapping library for tensor parallelism on GPUs ☆184 · Updated this week
- Yinghan's Code Sample ☆272 · Updated 2 years ago
- Composable Kernel: a performance-portable programming model for machine learning tensor operators ☆293 · Updated this week
- Development repository for the Triton-Linalg conversion ☆137 · Updated last month
- Dissecting NVIDIA GPU architecture ☆78 · Updated 2 years ago
- Optimizing SGEMM kernel functions on NVIDIA GPUs to close-to-cuBLAS performance ☆265 · Updated 2 years ago
- A simple high-performance CUDA GEMM implementation ☆319 · Updated 8 months ago
- An extension library of the WMMA API (Tensor Core API) ☆81 · Updated 2 months ago
- CUDA matrix multiplication optimization ☆118 · Updated 2 months ago
- A library of GPU kernels for sparse matrix operations ☆240 · Updated 3 years ago
- Benchmark code for the "Online normalizer calculation for softmax" paper ☆52 · Updated 6 years ago
- Step-by-step optimization of CUDA SGEMM ☆207 · Updated 2 years ago