HabanaAI / Habana_Custom_Kernel
Provides examples for writing and building Habana custom kernels using the HabanaTools
☆22 · Updated 5 months ago
Alternatives and similar repositories for Habana_Custom_Kernel
Users interested in Habana_Custom_Kernel are comparing it to the libraries listed below.
- SYCL* Templates for Linear Algebra (SYCL*TLA) - SYCL-based CUTLASS implementation for Intel GPUs ☆40 · Updated this week
- ☆108 · Updated last year
- ☆50 · Updated 6 years ago
- Artifacts of EVT (ASPLOS'24) ☆26 · Updated last year
- Fast GPU-based tensor core reductions ☆13 · Updated 2 years ago
- Mille Crepe Bench: layer-wise performance analysis for deep learning frameworks. ☆17 · Updated 5 years ago
- Magicube is a high-performance library for quantized sparse matrix operations (SpMM and SDDMM) of deep learning on Tensor Cores. ☆89 · Updated 2 years ago
- ☆83 · Updated 2 years ago
- Test suite for probing the numerical behavior of NVIDIA tensor cores ☆41 · Updated last year
- Automatic Mapping Generation, Verification, and Exploration for ISA-based Spatial Accelerators ☆115 · Updated 2 years ago
- Implementation of TSM2L and TSM2R -- High-Performance Tall-and-Skinny Matrix-Matrix Multiplication Algorithms for CUDA ☆35 · Updated 5 years ago
- ☆39 · Updated 5 years ago
- ☆32 · Updated 3 years ago
- Code for the paper "Design Principles for Sparse Matrix Multiplication on the GPU", accepted to Euro-Par 2018 ☆73 · Updated 5 years ago
- Dissecting NVIDIA GPU Architecture ☆106 · Updated 3 years ago
- SparseTIR: Sparse Tensor Compiler for Deep Learning ☆139 · Updated 2 years ago
- GitHub mirror of the triton-lang/triton repo. ☆82 · Updated this week
- GVProf: A Value Profiler for GPU-based Clusters ☆52 · Updated last year
- ☆63 · Updated 9 months ago
- ☆47 · Updated 4 years ago
- ☆23 · Updated 2 years ago
- Matrix Multiply-Accumulate with CUDA and WMMA (Tensor Cores) ☆143 · Updated 5 years ago
- ☆16 · Updated 2 years ago
- DietCode Code Release ☆65 · Updated 3 years ago
- ☆33 · Updated last year
- Anatomy of High-Performance GEMM with Online Fault Tolerance on GPUs ☆12 · Updated 6 months ago
- ☆41 · Updated last year
- GPU Performance Advisor ☆65 · Updated 3 years ago
- FlashSparse significantly reduces the computation redundancy for unstructured sparsity (for SpMM and SDDMM) on Tensor Cores through a Swa… ☆29 · Updated this week
- Assembler for NVIDIA Volta and Turing GPUs ☆230 · Updated 3 years ago