Source code of the PPoPP '22 paper: "TileSpGEMM: A Tiled Algorithm for Parallel Sparse General Matrix-Matrix Multiplication on GPUs" by Yuyao Niu, Zhengyang Lu, Haonan Ji, Shuhui Song, Zhou Jin, and Weifeng Liu.
☆46 · Updated last year (May 22, 2024)
Alternatives and similar repositories for TileSpGEMM
Users interested in TileSpGEMM are comparing it to the libraries listed below.
- Source code of the IPDPS '21 paper: "TileSpMV: A Tiled Algorithm for Sparse Matrix-Vector Multiplication on GPUs" by Yuyao Niu, Zhengyang… · ☆12 · Updated 3 years ago (Aug 12, 2022)
- Efficient SpGEMM on GPU using CUDA and CSR · ☆59 · Updated 2 years ago (Jul 18, 2023)
- ☆16 · Updated 3 years ago (Nov 22, 2022)
- CSR-based SpGEMM on nVidia and AMD GPUs · ☆47 · Updated 9 years ago (Apr 9, 2016)
- ☆46 · Updated last year (Jun 19, 2024)
- ☆112 · Updated 4 years ago (Jul 3, 2021)
- Fast GPU-based tensor core reductions · ☆13 · Updated 3 years ago (Jan 13, 2023)
- A GPU algorithm for sparse matrix-matrix multiplication · ☆75 · Updated 5 years ago (Oct 1, 2020)
- ☆43 · Updated 4 years ago (May 21, 2021)
- Code for "High Performance Unstructured SpMM Computation Using Tensor Cores" · ☆33 · Updated last year (Nov 3, 2024)
- CUDA sparse matrix-vector multiplication using the sliced coordinate format · ☆22 · Updated 7 years ago (Jun 8, 2018)
- GPU implementation of Winograd convolution · ☆10 · Updated 8 years ago (Oct 23, 2017)
- Arrow Matrix Decomposition: communication-efficient distributed sparse matrix multiplication · ☆15 · Updated last year (Mar 25, 2024)
- CSR5-based SpMV on CPUs, GPUs and Xeon Phi · ☆110 · Updated last year (Jun 10, 2024)
- A row-decomposition-based approach for sparse matrix multiplication on GPUs · ☆28 · Updated 2 years ago (Nov 29, 2023)
- ☆12 · Updated 3 years ago (May 19, 2022)
- An intelligent matrix format designer for SpMV · ☆10 · Updated 2 years ago (Oct 10, 2023)
- Artifact for USENIX ATC '23: "TC-GNN: Bridging Sparse GNN Computation and Dense Tensor Cores on GPUs" · ☆53 · Updated 2 years ago (Oct 16, 2023)
- Source code of the paper "OpSparse: A Highly Optimized Framework for Sparse General Matrix Multiplication on GPUs" · ☆15 · Updated 3 years ago (Aug 23, 2022)
- Magicube, a high-performance library for quantized sparse matrix operations (SpMM and SDDMM) in deep learning on Tensor Cores · ☆91 · Updated 3 years ago (Nov 23, 2022)
- Escoin: Efficient Sparse Convolutional Neural Network Inference on GPUs · ☆16 · Updated 7 years ago (Feb 28, 2019)
- Mirror of http://gitlab.hpcrl.cse.ohio-state.edu/chong/ppopp19_ae, refactored for readability · ☆15 · Updated 4 years ago (Oct 20, 2021)
- New batched algorithm for sparse matrix-matrix multiplication (SpMM) · ☆16 · Updated 6 years ago (May 7, 2019)
- ☆36 · Updated 4 years ago (Apr 20, 2021)
- Sparse matrix computation library for GPUs · ☆59 · Updated 5 years ago (Jul 12, 2020)
- Simulator for SPADA, an SpGEMM accelerator with adaptive dataflow · ☆47 · Updated 3 years ago (Jan 26, 2023)
- ☆19 · Updated 4 years ago (Aug 26, 2021)
- A highly efficient library for GEMM operations on Sunway TaihuLight · ☆18 · Updated 5 years ago (Sep 7, 2020)
- Simple example of writing an implicit GEMM convolution in CUDA using the tensor core WMMA API, with bindings for PyTorch · ☆18 · Updated 2 years ago (Jun 29, 2023)
- SpMV using CUDA · ☆20 · Updated 7 years ago (Mar 5, 2018)
- ☆50 · Updated 6 years ago (Jun 27, 2019)
- Source code of the SC '23 paper: "DASP: Specific Dense Matrix Multiply-Accumulate Units Accelerated General Sparse Matrix-Vector Multipli…" · ☆28 · Updated last year (Jun 18, 2024)
- Code for the paper "Engineering a High-Performance GPU B-Tree", accepted to PPoPP 2019 · ☆58 · Updated 3 years ago (Jun 27, 2022)
- A Winograd minimal-filter implementation in CUDA · ☆28 · Updated 4 years ago (Aug 25, 2021)
- Artifacts of EVT (ASPLOS '24) · ☆29 · Updated last year (Mar 6, 2024)
- CSR-based SpMV on heterogeneous processors (Intel Broadwell, AMD Kaveri and nVidia Tegra K1) · ☆26 · Updated 10 years ago (May 12, 2015)
- PyTorch-based fast and efficient processing for various machine learning applications with diverse sparsity · ☆120 · Updated 2 months ago (Dec 22, 2025)
- Tacker: Tensor-CUDA core kernel fusion for improving GPU utilization while ensuring QoS · ☆34 · Updated last year (Feb 10, 2025)
- A library of GPU kernels for sparse matrix operations · ☆283 · Updated 5 years ago (Nov 24, 2020)