pnnl / TCBNN
☆34 · Updated 2 years ago

Alternatives and similar repositories for TCBNN
Users interested in TCBNN are comparing it to the libraries listed below.
- Official implementation of "Searching for Winograd-aware Quantized Networks" (MLSys'20) · ☆27 · updated last year
- Singular Binarized Neural Network based on GPU bit operations (see our SC'19 paper) · ☆15 · updated 4 years ago
- Benchmark for matrix multiplications between dense and block-sparse (BSR) matrices in TVM, blocksparse (Gray et al.), and cuSparse · ☆24 · updated 4 years ago
- GEMM- and Winograd-based convolutions using CUTLASS · ☆26 · updated 4 years ago
- Multi-target compiler for Sum-Product Networks, based on MLIR and LLVM · ☆23 · updated 6 months ago
- CUDA templates for tile-sparse matrix multiplication based on CUTLASS · ☆51 · updated 7 years ago
- A Winograd minimal-filter implementation in CUDA · ☆24 · updated 3 years ago
- Code for the paper "Design Principles for Sparse Matrix Multiplication on the GPU" (Euro-Par 2018) · ☆71 · updated 4 years ago
- Chameleon: Adaptive Code Optimization for Expedited Deep Neural Network Compilation · ☆27 · updated 5 years ago
- Improving Post-Training Neural Quantization: Layer-wise Calibration and Integer Programming · ☆97 · updated 3 years ago
- GoldenEye: a functional simulator with fault-injection capabilities for common and emerging numerical formats, implemented for the PyTo… · ☆25 · updated 7 months ago
- Escoin: Efficient Sparse Convolutional Neural Network Inference on GPUs · ☆16 · updated 6 years ago
- PyTorch extension for emulating FP8 data formats on standard FP32 Xeon/GPU hardware · ☆110 · updated 6 months ago
- Implementation of TSM2L and TSM2R: high-performance tall-and-skinny matrix-matrix multiplication algorithms for CUDA · ☆32 · updated 4 years ago
- Post-training sparsity-aware quantization · ☆34 · updated 2 years ago
- Official implementation of the EMNLP'23 paper "Revisiting Block-based Quantisation: What is Important for Sub-8-bit LLM Inference?" · ☆22 · updated last year
- Code for an ICML 2021 submission · ☆34 · updated 4 years ago
- BSQ: Exploring Bit-Level Sparsity for Mixed-Precision Neural Network Quantization (ICLR 2021) · ☆40 · updated 4 years ago
- Artifact-evaluation repository for the ASPLOS 2023 paper "SparseTIR: Composable Abstractions for Sparse Compilation in Deep Learning" · ☆25 · updated 2 years ago
- Simulator for BitFusion · ☆100 · updated 4 years ago
- Artifact repository for the paper "Automatic Generation of High-Performance Quantized Machine Learning Kernels" · ☆17 · updated 4 years ago