andravin / wincnn
Winograd minimal convolution algorithm generator for convolutional neural networks.
☆622 · Updated 4 years ago
Alternatives and similar repositories for wincnn
Users interested in wincnn are comparing it to the libraries listed below.
- Efficient Sparse-Winograd Convolutional Neural Networks (ICLR 2018) ☆193 · Updated 6 years ago
- Caffe for Sparse Convolutional Neural Network ☆237 · Updated 2 years ago
- Ristretto: Caffe-based approximation of convolutional neural networks ☆290 · Updated 6 years ago
- A collection of works on reducing model size or building ASIC/FPGA accelerators for machine learning ☆562 · Updated last year
- Fast CUDA kernels for ResNet inference ☆180 · Updated 6 years ago
- Caffe implementation of incremental network quantization ☆191 · Updated 7 years ago
- Deep Compression on AlexNet ☆672 · Updated 3 years ago
- Quantization of convolutional neural networks ☆245 · Updated last year
- Caffe implementation of accurate low-precision neural networks ☆118 · Updated 6 years ago
- A minimal cuDNN deep learning training code sample using LeNet ☆268 · Updated 2 years ago
- Neural network visualizer and analyzer ☆164 · Updated 6 years ago
- Caffe for Sparse and Low-rank Deep Neural Networks ☆381 · Updated 5 years ago
- BLISlab: a sandbox for optimizing GEMM ☆538 · Updated 4 years ago
- An efficient framework for convolutional neural networks ☆277 · Updated 2 years ago
- BMXNet: an open-source binary neural network implementation based on MXNet (new version: https://github.com/hpi-xnor/BMXNet-v2) ☆350 · Updated 5 years ago
- Graph transforms to quantize and retrain deep neural nets in TensorFlow ☆168 · Updated 5 years ago
- Low-precision matrix multiplication ☆1,815 · Updated last year
- BinaryNets in TensorFlow with an XNOR GEMM op ☆154 · Updated 8 years ago
- Generate a quantization parameter file for ncnn framework int8 inference ☆517 · Updated 5 years ago
- tophub autotvm log collections ☆69 · Updated 2 years ago
- Optimizing mobile deep learning on ARM GPU with TVM ☆181 · Updated 6 years ago
- Winograd-based convolution implementation in OpenCL ☆28 · Updated 8 years ago
- Training deep neural networks with weights and activations constrained to +1 or -1 ☆304 · Updated 3 years ago
- High-performance cross-platform inference engine; Anakin runs on x86 CPU, ARM, NVIDIA GPU, AMD GPU, Bitmain and Cambricon devices ☆534 · Updated 3 years ago
- LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks ☆242 · Updated 3 years ago
- Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1 ☆1,061 · Updated 6 years ago
- TVM integration into PyTorch ☆454 · Updated 5 years ago
- Caffe for Deep Compression ☆239 · Updated 7 years ago
- Quantized Neural Networks: networks trained for inference at arbitrarily low precision ☆147 · Updated 7 years ago
- Partial source code of deepcore v0.7 ☆27 · Updated 5 years ago
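Many of the repositories above (Ristretto, the ncnn int8 table generator, the quantized/low-precision network projects) revolve around post-training quantization. As a minimal sketch of the common symmetric per-tensor int8 scheme, with function names that are illustrative rather than any listed library's API:

```python
import numpy as np

def quantize(x, num_bits=8):
    """Symmetric per-tensor quantization: map floats to signed ints
    using a single scale derived from the tensor's max magnitude.
    (Illustrative sketch, not the API of any repo listed above.)"""
    qmax = 2 ** (num_bits - 1) - 1          # 127 for int8
    scale = np.abs(x).max() / qmax          # one scale for the tensor
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from the int8 codes."""
    return q.astype(np.float32) * scale

x = np.array([-1.0, -0.5, 0.0, 0.75, 1.0], dtype=np.float32)
q, s = quantize(x)
err = np.abs(dequantize(q, s) - x).max()    # bounded by ~scale/2
```

Calibration tools such as the ncnn table generator refine the scale choice (e.g. by minimizing KL divergence over activation histograms rather than using the raw max), but the quantize/dequantize round trip above is the core mechanism.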