maltanar / gemmbitserial
Fast matrix multiplication for few-bit integer matrices on CPUs.
☆28 · Updated 6 years ago
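Bit-serial GEMM computes products of few-bit integer matrices by decomposing each operand into bit planes, then combining bitwise-AND and popcount results weighted by powers of two. A minimal Python sketch of the underlying bit-serial dot product (illustrative only, not gemmbitserial's actual kernel; the function name and default bit widths are assumptions):

```python
def bitserial_dot(a, b, bits_a=2, bits_b=2):
    """Dot product of two vectors of few-bit unsigned integers,
    computed bit-serially: pack bit position i of every element
    into one bitmask, then sum popcount(A_i & B_j) << (i + j)."""
    def plane(v, i):
        # Bitmask holding bit i of each element of v.
        mask = 0
        for k, x in enumerate(v):
            mask |= ((x >> i) & 1) << k
        return mask

    acc = 0
    for i in range(bits_a):
        ai = plane(a, i)
        for j in range(bits_b):
            bj = plane(b, j)
            # popcount of the AND counts positions where both bits are set.
            acc += bin(ai & bj).count("1") << (i + j)
    return acc

# Matches the ordinary dot product: 1*3 + 2*1 + 3*2 + 0*2 = 11
print(bitserial_dot([1, 2, 3, 0], [3, 1, 2, 2]))  # -> 11
```

On real hardware the bit planes are packed into machine words once per matrix, so each AND + popcount processes 64 elements at a time, which is where the speedup for 1-4 bit operands comes from.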
Alternatives and similar repositories for gemmbitserial
Users interested in gemmbitserial are comparing it to the libraries listed below.
- Quantize weights and activations in Recurrent Neural Networks. ☆95 · Updated 7 years ago
- Simple Training and Deployment of Fast End-to-End Binary Networks ☆158 · Updated 3 years ago
- CUDA templates for tile-sparse matrix multiplication based on CUTLASS. ☆50 · Updated 7 years ago
- Test Winograd convolution written in TVM for CUDA and AMDGPU ☆41 · Updated 7 years ago
- Training deep neural networks with low precision multiplications ☆64 · Updated 10 years ago
- Implementation of convolution layer in different flavors ☆68 · Updated 8 years ago
- An exploration of log domain "alternative floating point" for hardware ML/AI accelerators. ☆397 · Updated 2 years ago
- ☆47 · Updated 5 years ago
- A PyTorch implementation of Scalpel: node pruning for five benchmark networks and SIMD-aware weight pruning for LeNet-300-100… ☆41 · Updated 7 years ago
- Implementation of "NITI: Training Integer Neural Networks Using Integer-only Arithmetic" on arXiv ☆86 · Updated 3 years ago
- Codebase associated with the PyTorch compiler tutorial ☆47 · Updated 6 years ago
- ☆68 · Updated 2 years ago
- LCNN: Lookup-based Convolutional Neural Network ☆52 · Updated 8 years ago
- Low Precision Arithmetic Simulation in PyTorch ☆286 · Updated last year
- ☆54 · Updated 7 years ago
- Implementation of the ICLR 2018 paper "Loss-aware Weight Quantization of Deep Networks" ☆27 · Updated 6 years ago
- A collection of works on neural networks and neural accelerators. ☆41 · Updated 6 years ago
- Graph Transforms to Quantize and Retrain Deep Neural Nets in TensorFlow ☆168 · Updated 5 years ago
- Training neural networks in TensorFlow 2.0 with 5x less memory ☆137 · Updated 3 years ago
- Contains the PyTorch scripts to train mixed-precision networks for microcontroller deployment, based on the memory contr… ☆50 · Updated last year
- Chameleon: Adaptive Code Optimization for Expedited Deep Neural Network Compilation ☆27 · Updated 6 years ago
- Highly optimized inference engine for Binarized Neural Networks ☆251 · Updated 2 weeks ago
- Kernel Fusion and Runtime Compilation Based on NNVM ☆72 · Updated 9 years ago
- GEMM and Winograd based convolutions using CUTLASS ☆28 · Updated 5 years ago
- DLPack for TensorFlow ☆35 · Updated 5 years ago
- An analytical performance modeling tool for deep neural networks. ☆91 · Updated 5 years ago
- Implementation of the paper "Latent Weights Do Not Exist: Rethinking Binarized Neural Network Optimization" ☆74 · Updated 5 years ago
- TVM stack: exploring the incredible explosion of deep-learning frameworks and how to bring them together ☆64 · Updated 7 years ago
- Official implementation of "Searching for Winograd-aware Quantized Networks" (MLSys'20) ☆27 · Updated 2 years ago
- Efficient Sparse-Winograd Convolutional Neural Networks (ICLR 2018) ☆193 · Updated 6 years ago