pnnl / TCBNN
Related projects:
- Benchmarks for matrix multiplication between dense and block-sparse (BSR) matrices in TVM, blocksparse (Gray et al.), and cuSPARSE.
- Singular Binarized Neural Network based on GPU bit operations (see our SC-19 paper).
- Artifact repository for the paper "Automatic Generation of High-Performance Quantized Machine Learning Kernels".
- Sparse kernels for GNNs based on TVM.
- Multi-target compiler for Sum-Product Networks, based on MLIR and LLVM.
- Code for the paper "Design Principles for Sparse Matrix Multiplication on the GPU" (Euro-Par 2018).
- Chameleon: Adaptive Code Optimization for Expedited Deep Neural Network Compilation.
- GPTPU (SC 2021).
- CUDA templates for tile-sparse matrix multiplication based on CUTLASS.
- A Winograd minimal filter implementation in CUDA.
- Implementation of TSM2L and TSM2R, high-performance tall-and-skinny matrix-matrix multiplication algorithms for CUDA.
- Code base for the OOPSLA '24 paper "UniSparse: An Intermediate Language for General Sparse Format Customization".
- Mixed-precision quantization for LLMs.
- Mille Crepe Bench: layer-wise performance analysis for deep learning frameworks.
- Benchmarks for PyTorch custom operators.
- Artifact-evaluation repository for the ASPLOS 2023 paper "SparseTIR: Composable Abstractions for Sparse Compilation in Deep Learning".
- GEMM and Winograd-based convolutions using CUTLASS.
- MLIRX is now defunct; see PolyBlocks instead: https://docs.polymagelabs.com
- Automatic mapping generation, verification, and exploration for ISA-based spatial accelerators.
- Magicube: a high-performance library for quantized sparse matrix operations (SpMM and SDDMM) in deep learning on Tensor Cores.
- Code for the paper "NeuralPower: Predict and Deploy Energy-Efficient Convolutional Neural Networks".
- Escoin: efficient sparse convolutional neural network inference on GPUs.
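Several of the projects above (the TVM/blocksparse/cuSPARSE benchmark, the tile-sparse CUTLASS templates, Magicube) revolve around block-sparse (BSR) matrix multiplication. As a rough illustration of the storage format they target, here is a minimal CPU-side BSR matrix-vector product in plain NumPy; the function and variable names are illustrative only and do not come from any of the listed libraries:

```python
import numpy as np

# Block-sparse (BSR) matrix-vector product, sketched in NumPy.
# BSR stores only the nonzero dense blocks, plus CSR-style indices
# over block rows and block columns.

def bsr_matvec(data, indices, indptr, x, blocksize):
    """data: (nblocks, R, C) array of nonzero dense blocks;
    indices: block-column index of each stored block;
    indptr: CSR-style pointers over block rows."""
    R, C = blocksize
    n_brows = len(indptr) - 1
    y = np.zeros(n_brows * R)
    for brow in range(n_brows):
        for k in range(indptr[brow], indptr[brow + 1]):
            bcol = indices[k]
            # one small dense GEMV per stored block
            y[brow * R:(brow + 1) * R] += data[k] @ x[bcol * C:(bcol + 1) * C]
    return y

# A 4x4 matrix holding two nonzero 2x2 diagonal blocks:
data = np.array([[[1., 2.], [3., 4.]],
                 [[5., 6.], [7., 8.]]])
indices = np.array([0, 1])    # block columns of the stored blocks
indptr = np.array([0, 1, 2])  # one stored block per block row
x = np.ones(4)
print(bsr_matvec(data, indices, indptr, x, (2, 2)))  # [ 3.  7. 11. 15.]
```

The GPU libraries in the list exploit exactly this structure: each stored block is a dense tile, so the inner product maps onto dense tile kernels (e.g. Tensor Core MMA fragments) while zero blocks are skipped entirely.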