maltanar / gemmbitserial
Fast matrix multiplication for few-bit integer matrices on CPUs.
☆28 · Updated 6 years ago
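The core idea behind bit-serial GEMM for few-bit integer matrices is to decompose each operand into bit planes, so every partial product reduces to an AND plus a popcount, weighted by the appropriate power of two. The sketch below is a minimal, unoptimized Python illustration of that decomposition for unsigned inputs; it is not gemmbitserial's actual API (the real library is vectorized C++), and the function name and signature are invented for this example.

```python
def bitserial_matmul(A, B_T, bits_a, bits_b):
    """Multiply few-bit unsigned matrices A (m x k) and B (k x n, passed
    transposed as B_T) via bit-plane decomposition: each scalar product
    becomes popcount(AND) of packed bit planes, shifted by the bit weights.
    Illustrative sketch only -- not the gemmbitserial API."""
    def bit_planes(M, bits):
        # For each row, pack bit p of every element into one Python int,
        # element c landing at bit position c of the packed plane.
        return [[sum(((row[c] >> p) & 1) << c for c in range(len(row)))
                 for p in range(bits)] for row in M]

    Ap = bit_planes(A, bits_a)
    Bp = bit_planes(B_T, bits_b)
    C = [[0] * len(B_T) for _ in range(len(A))]
    for i in range(len(A)):
        for j in range(len(B_T)):
            acc = 0
            for p in range(bits_a):
                for q in range(bits_b):
                    # popcount of the ANDed planes counts the (1,1) pairs,
                    # each contributing 2^(p+q) to the dot product
                    acc += bin(Ap[i][p] & Bp[j][q]).count("1") << (p + q)
            C[i][j] = acc
    return C

# 2-bit example: A @ B with B = [[2, 0], [1, 3]] passed as its transpose
print(bitserial_matmul([[1, 2], [3, 1]], [[2, 1], [0, 3]], 2, 2))
```

Note that the inner loop runs bits_a × bits_b popcount passes, which is why this approach pays off only for very low bit widths (1-4 bits), where the planes are cheap and the AND/popcount pair maps onto fast SIMD instructions.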
Alternatives and similar repositories for gemmbitserial
Users interested in gemmbitserial are comparing it to the libraries listed below.
- Quantize weights and activations in Recurrent Neural Networks. ☆94 · Updated 7 years ago
- Simple Training and Deployment of Fast End-to-End Binary Networks ☆159 · Updated 3 years ago
- CUDA templates for tile-sparse matrix multiplication based on CUTLASS. ☆50 · Updated 7 years ago
- A PyTorch implementation of Scalpel: node pruning for five benchmark networks and SIMD-aware weight pruning for LeNet-300-100… ☆41 · Updated 6 years ago
- ☆47 · Updated 5 years ago
- Training deep neural networks with low-precision multiplications ☆64 · Updated 10 years ago
- Test Winograd convolution written in TVM for CUDA and AMDGPU ☆41 · Updated 6 years ago
- LCNN: Lookup-based Convolutional Neural Network ☆52 · Updated 7 years ago
- An exploration of log-domain "alternative floating point" for hardware ML/AI accelerators. ☆394 · Updated 2 years ago
- Codebase associated with the PyTorch compiler tutorial ☆46 · Updated 6 years ago
- Implementation of the convolution layer in different flavors ☆68 · Updated 8 years ago
- Low Precision Arithmetic Simulation in PyTorch ☆285 · Updated last year
- Training neural networks in TensorFlow 2.0 with 5x less memory ☆136 · Updated 3 years ago
- ☆68 · Updated 2 years ago
- XLA integration of Open Neural Network Exchange (ONNX) ☆19 · Updated 7 years ago
- GEMM and Winograd based convolutions using CUTLASS ☆28 · Updated 5 years ago
- Chameleon: Adaptive Code Optimization for Expedited Deep Neural Network Compilation ☆27 · Updated 5 years ago
- Code for the paper "SWALP: Stochastic Weight Averaging for Low-Precision Training". ☆62 · Updated 6 years ago
- Implementation of the ICLR 2018 paper "Loss-aware Weight Quantization of Deep Networks" ☆26 · Updated 5 years ago
- The official proof-of-concept C++ implementation of PocketNN. ☆35 · Updated 2 weeks ago
- ColTraIn HBFP Training Emulator ☆16 · Updated 2 years ago
- Implementation of "NITI: Training Integer Neural Networks Using Integer-only Arithmetic" (arXiv) ☆86 · Updated 3 years ago
- Fast sparse deep learning on CPUs ☆56 · Updated 3 years ago
- int8_t and int16_t matrix multiply based on https://arxiv.org/abs/1705.01991 ☆74 · Updated last year
- Efficient Sparse-Winograd Convolutional Neural Networks (ICLR 2018) ☆193 · Updated 6 years ago
- TVM stack: exploring the incredible explosion of deep-learning frameworks and how to bring them together ☆64 · Updated 7 years ago
- A collection of works on neural networks and neural accelerators. ☆41 · Updated 6 years ago
- PyProf2: PyTorch profiling tool ☆82 · Updated 5 years ago
- Highly optimized inference engine for Binarized Neural Networks ☆251 · Updated 3 weeks ago
- ☆54 · Updated 7 years ago