maltanar / gemmbitserial
Fast matrix multiplication for few-bit integer matrices on CPUs.
☆28 · Updated 6 years ago
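The technique behind gemmbitserial decomposes each few-bit integer operand into binary bit planes, so the full product becomes a weighted sum of binary matrix products. A minimal NumPy sketch of that idea follows, assuming unsigned operands; the function name `bitserial_gemm` and its signature are illustrative, not gemmbitserial's actual API, and a real implementation packs bit planes into machine words and uses AND + popcount instead of dense binary matmuls.

```python
import numpy as np

def bitserial_gemm(A, B, bits_a=2, bits_b=2):
    """Multiply few-bit unsigned integer matrices bit-serially.

    Uses A = sum_i 2^i * A_i and B = sum_j 2^j * B_j, where A_i, B_j are
    binary bit planes, so A @ B = sum_{i,j} 2^(i+j) * (A_i @ B_j).
    Hypothetical sketch; not gemmbitserial's real interface.
    """
    acc = np.zeros((A.shape[0], B.shape[1]), dtype=np.int64)
    for i in range(bits_a):
        a_plane = (A >> i) & 1            # i-th bit plane of A (0/1 matrix)
        for j in range(bits_b):
            b_plane = (B >> j) & 1        # j-th bit plane of B
            # Binary-times-binary product, weighted by the bit positions.
            # Optimized kernels replace this with AND + popcount on
            # bit-packed rows/columns.
            acc += (a_plane @ b_plane).astype(np.int64) << (i + j)
    return acc

rng = np.random.default_rng(0)
A = rng.integers(0, 4, (4, 8))            # 2-bit values in [0, 3]
B = rng.integers(0, 4, (8, 4))
assert np.array_equal(bitserial_gemm(A, B), A @ B)
```

The payoff of this decomposition is that each inner product over a bit plane needs only single-bit arithmetic, which maps well to SIMD popcount instructions on CPUs; the cost grows with the product of the two operands' bit widths, which is why it pays off only for few-bit matrices.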
Alternatives and similar repositories for gemmbitserial
Users interested in gemmbitserial are comparing it to the repositories listed below.
- Quantize weights and activations in Recurrent Neural Networks (☆95, updated 7 years ago)
- Simple Training and Deployment of Fast End-to-End Binary Networks (☆159, updated 3 years ago)
- An exploration of log-domain "alternative floating point" for hardware ML/AI accelerators (☆399, updated 2 years ago)
- CUDA templates for tile-sparse matrix multiplication based on CUTLASS (☆50, updated 7 years ago)
- ☆47, updated 5 years ago
- Training deep neural networks with low-precision multiplications (☆64, updated 10 years ago)
- Codebase associated with the PyTorch compiler tutorial (☆47, updated 6 years ago)
- LCNN: Lookup-based Convolutional Neural Network (☆52, updated 8 years ago)
- A PyTorch implementation of Scalpel: node pruning for five benchmark networks and SIMD-aware weight pruning for LeNet-300-100… (☆41, updated 7 years ago)
- Test of Winograd convolution written in TVM for CUDA and AMDGPU (☆41, updated 7 years ago)
- Highly optimized inference engine for Binarized Neural Networks (☆251, updated this week)
- Kernel fusion and runtime compilation based on NNVM (☆72, updated 9 years ago)
- ☆68, updated 2 years ago
- Implementations of the convolution layer in different flavors (☆68, updated 8 years ago)
- DLPack for TensorFlow (☆35, updated 5 years ago)
- Reference workloads for modern deep learning methods (☆73, updated 3 years ago)
- Low-Precision Arithmetic Simulation in PyTorch (☆289, updated last year)
- Library for fast image convolution in neural networks on Intel architectures (☆30, updated 8 years ago)
- ☆29, updated 4 years ago
- A collection of works on neural networks and neural accelerators (☆41, updated 6 years ago)
- ColTraIn HBFP Training Emulator (☆16, updated 2 years ago)
- Image to column (☆30, updated 11 years ago)
- Chameleon: Adaptive Code Optimization for Expedited Deep Neural Network Compilation (☆27, updated 6 years ago)
- A self-contained version of the tutorial that can be easily cloned and viewed by others (☆24, updated 6 years ago)
- int8_t and int16_t matrix multiply based on https://arxiv.org/abs/1705.01991 (☆74, updated 2 years ago)
- Train neural networks with joint quantization and pruning of both weights and activations using any PyTorch modules (☆43, updated 3 years ago)
- Implementation of the ICLR 2018 paper "Loss-aware Weight Quantization of Deep Networks" (☆27, updated 6 years ago)
- Efficient Sparse-Winograd Convolutional Neural Networks (ICLR 2018) (☆193, updated 6 years ago)
- Official implementation of "Searching for Winograd-aware Quantized Networks" (MLSys '20) (☆27, updated 2 years ago)
- TVM stack: exploring the explosion of deep-learning frameworks and how to bring them together (☆64, updated 7 years ago)