NVIDIA / cudnn-frontend
cudnn_frontend provides a C++ wrapper for the cuDNN backend API, along with samples showing how to use it
☆502 · Updated 3 weeks ago
Alternatives and similar repositories for cudnn-frontend:
Users interested in cudnn-frontend are comparing it to the libraries listed below.
- Composable Kernel: Performance Portable Programming Model for Machine Learning Tensor Operators ☆349 · Updated this week
- A Fusion Code Generator for NVIDIA GPUs (commonly known as "nvFuser") ☆303 · Updated this week
- CUDA Kernel Benchmarking Library ☆561 · Updated 3 months ago
- The NVIDIA® Tools Extension SDK (NVTX) is a C-based Application Programming Interface (API) for annotating events, code ranges, and resou… ☆347 · Updated this week
- Several optimization methods of half-precision general matrix multiplication (HGEMM) using tensor cores with the WMMA API and MMA PTX instruct… ☆345 · Updated 5 months ago
- Representation and Reference Lowering of ONNX Models in MLIR Compiler Infrastructure ☆812 · Updated this week
- Training material for Nsight developer tools ☆148 · Updated 6 months ago
- Optimizing SGEMM kernel functions on NVIDIA GPUs to close-to-cuBLAS performance. ☆324 · Updated last month
- An unofficial CUDA assembler, for all generations of SASS, hopefully :) ☆426 · Updated last year
- A simple high-performance CUDA GEMM implementation. ☆346 · Updated last year
- The Torch-MLIR project aims to provide first-class support from the PyTorch ecosystem to the MLIR ecosystem. ☆1,424 · Updated this week
- AMD's graph optimization engine. ☆208 · Updated this week
- TensorRT Model Optimizer is a unified library of state-of-the-art model optimization techniques such as quantization, pruning, distillati… ☆708 · Updated last week
- CUDA Matrix Multiplication Optimization ☆161 · Updated 7 months ago
- Experimental projects related to TensorRT ☆89 · Updated this week
- Assembler for NVIDIA Volta and Turing GPUs ☆212 · Updated 3 years ago
- Examples demonstrating available options to program multiple GPUs in a single node or a cluster ☆610 · Updated 3 months ago
- Step-by-step optimization of CUDA SGEMM ☆284 · Updated 2 years ago
- OpenAI Triton backend for Intel® GPUs ☆165 · Updated this week
- Yinghan's Code Sample ☆305 · Updated 2 years ago
- Shared Middle-Layer for Triton Compilation ☆226 · Updated this week
- Backward-compatible ML compute opset inspired by HLO/MHLO ☆446 · Updated last week
- An easy-to-understand TensorOp Matmul Tutorial ☆316 · Updated 5 months ago
- Collection of benchmarks to measure basic GPU capabilities ☆296 · Updated last week
- Development repository for the Triton-Linalg conversion ☆173 · Updated 2 weeks ago
- Stretching GPU performance for GEMMs and tensor contractions. ☆233 · Updated this week
- A model compilation solution for various hardware ☆405 · Updated this week
- FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/ ☆1,260 · Updated this week