ceruleangu / Block-Sparse-Benchmark
Benchmark of matrix multiplications between dense and block-sparse (BSR) matrices in TVM, blocksparse (Gray et al.), and cuSparse.
☆24 · Updated 4 years ago
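As a rough illustration of the operation under test, here is a minimal SciPy sketch (an illustration only, not code from this repository, which benchmarks TVM, blocksparse, and cuSparse kernels): a block-sparse (BSR) weight matrix is multiplied by a dense input and checked against the dense reference.

```python
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)
block, m, k, n = 16, 256, 256, 256

# Block-sparse weights: keep roughly 20% of the 16x16 blocks, zero the rest.
keep = (rng.random((m // block, k // block)) < 0.2).astype(np.float32)
weights = rng.standard_normal((m, k)).astype(np.float32)
weights *= np.kron(keep, np.ones((block, block), dtype=np.float32))
weights_bsr = sp.bsr_matrix(weights, blocksize=(block, block))  # BSR storage

x = rng.standard_normal((k, n)).astype(np.float32)
out = weights_bsr @ x  # block-sparse x dense matmul
assert np.allclose(out, weights @ x, atol=1e-3)
```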
Alternatives and similar repositories for Block-Sparse-Benchmark
Users interested in Block-Sparse-Benchmark are comparing it to the libraries listed below.
- ☆13 · Updated 3 years ago
- Chameleon: Adaptive Code Optimization for Expedited Deep Neural Network Compilation ☆27 · Updated 5 years ago
- Repository for artifact evaluation of the ASPLOS 2023 paper "SparseTIR: Composable Abstractions for Sparse Compilation in Deep Learning" ☆25 · Updated 2 years ago
- MAGIS: Memory Optimization via Coordinated Graph Transformation and Scheduling for DNN (ASPLOS'24) ☆52 · Updated last year
- Implementation of the paper "AdaTune: Adaptive Tensor Program Compilation Made Efficient" (NeurIPS 2020) ☆14 · Updated 4 years ago
- Singular Binarized Neural Network based on GPU Bit Operations (see our SC-19 paper) ☆15 · Updated 4 years ago
- An extension of TVMScript for writing simple, high-performance GPU kernels with Tensor Cores ☆50 · Updated 11 months ago
- DietCode Code Release ☆64 · Updated 2 years ago
- ☆41 · Updated last year
- Artifacts of EVT ASPLOS'24 ☆26 · Updated last year
- ☆19 · Updated 3 years ago
- ☆18 · Updated 4 years ago
- ☆39 · Updated 5 years ago
- ☆14 · Updated 3 years ago
- ☆40 · Updated 3 years ago
- ☆31 · Updated 2 years ago
- Benchmark PyTorch Custom Operators ☆14 · Updated last year
- ☆22 · Updated 2 years ago
- Automatic Mapping Generation, Verification, and Exploration for ISA-based Spatial Accelerators ☆112 · Updated 2 years ago
- Tacker: Tensor-CUDA Core Kernel Fusion for Improving the GPU Utilization while Ensuring QoS ☆27 · Updated 4 months ago
- ☆23 · Updated 6 months ago
- SparseTIR: Sparse Tensor Compiler for Deep Learning ☆138 · Updated 2 years ago
- Sparse kernels for GNNs based on TVM ☆17 · Updated 4 years ago
- ☆31 · Updated last year
- Implementation of TSM2L and TSM2R -- High-Performance Tall-and-Skinny Matrix-Matrix Multiplication Algorithms for CUDA ☆32 · Updated 4 years ago
- Magicube is a high-performance library for quantized sparse matrix operations (SpMM and SDDMM) in deep learning on Tensor Cores (a short SDDMM sketch follows this list) ☆89 · Updated 2 years ago
- Workload-Aware Co-Optimization ☆8 · Updated 2 years ago
- A study of Ampere's sparse matmul ☆18 · Updated 4 years ago
- ASPLOS'24: Optimal Kernel Orchestration for Tensor Programs with Korch ☆37 · Updated 2 months ago
- ☆22 · Updated 6 years ago
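For context on the SDDMM primitive named in the Magicube entry above, here is a minimal NumPy/SciPy sketch (an illustration of the math only, not Magicube's quantized Tensor Core implementation): the dense-dense product A @ B is evaluated only at the nonzero positions of a sparse sampling matrix S.

```python
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(1)
m, k, n = 128, 64, 128
A = rng.standard_normal((m, k)).astype(np.float32)
B = rng.standard_normal((k, n)).astype(np.float32)
S = sp.random(m, n, density=0.05, format="coo", dtype=np.float32)  # sampling pattern

# SDDMM: compute (A @ B) only at S's nonzero coordinates.
rows, cols = S.row, S.col
vals = np.einsum("ij,ji->i", A[rows], B[:, cols])  # one dot product per nonzero
out = sp.coo_matrix((vals, (rows, cols)), shape=(m, n))

# Dense reference masked by S's sparsity pattern.
ref = (A @ B) * (S.toarray() != 0)
assert np.allclose(out.toarray(), ref, atol=1e-4)
```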