andravin / wincnn
Winograd minimal convolution algorithm generator for convolutional neural networks.
☆624 · Updated 5 years ago
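For context, wincnn generates the transform matrices for Winograd's minimal filtering algorithms F(m, r). Below is a minimal sketch, assuming the standard F(2,3) matrices B^T, G, and A^T from Lavin & Gray's "Fast Algorithms for Convolutional Neural Networks" (the same family of transforms wincnn derives); the NumPy code is illustrative only and does not use wincnn's API.

```python
import numpy as np

# Standard F(2,3) transform matrices (Lavin & Gray, 2015).
# Y = A^T [(G g) * (B^T d)] computes 2 outputs of a 3-tap
# convolution with 4 multiplications instead of 6.
BT = np.array([[1,  0, -1,  0],
               [0,  1,  1,  0],
               [0, -1,  1,  0],
               [0,  1,  0, -1]], dtype=float)
G  = np.array([[1.0,  0.0, 0.0],
               [0.5,  0.5, 0.5],
               [0.5, -0.5, 0.5],
               [0.0,  0.0, 1.0]], dtype=float)
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=float)

d = np.array([1.0, 2.0, 3.0, 4.0])   # input tile (4 samples)
g = np.array([0.5, 1.0, -0.5])       # 3-tap filter

y = AT @ ((G @ g) * (BT @ d))        # Winograd result

# Check against direct (correlation-style) convolution.
ref = np.array([d[i:i + 3] @ g for i in range(2)])
assert np.allclose(y, ref)
print(y, ref)
```

Larger tiles such as F(4,3) or F(6,3) cut multiplications further at the cost of numerical accuracy, which is where a generator like wincnn, which lets you choose the interpolation points, becomes useful.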
Alternatives and similar repositories for wincnn
Users interested in wincnn are comparing it to the libraries listed below.
- Efficient Sparse-Winograd Convolutional Neural Networks (ICLR 2018) ☆193 · Updated 6 years ago
- Caffe for Sparse Convolutional Neural Network ☆237 · Updated 2 years ago
- Ristretto: Caffe-based approximation of convolutional neural networks. ☆289 · Updated 6 years ago
- Fast CUDA kernels for ResNet inference. ☆182 · Updated 6 years ago
- Caffe implementation of Incremental Network Quantization ☆191 · Updated 7 years ago
- A collection of work on reducing model size and on ASIC/FPGA accelerators for machine learning ☆565 · Updated last year
- An efficient framework for convolutional neural networks ☆277 · Updated 2 years ago
- Caffe implementation of accurate low-precision neural networks ☆119 · Updated 7 years ago
- (New version is out: https://github.com/hpi-xnor/BMXNet-v2) BMXNet: An Open-Source Binary Neural Network Implementation Based on MXNet ☆351 · Updated 6 years ago
- TVM integration into PyTorch ☆455 · Updated 5 years ago
- Neural network visualizer and analyzer ☆164 · Updated 7 years ago
- Quantization of convolutional neural networks. ☆249 · Updated last year
- BinaryNets in TensorFlow with XNOR GEMM op ☆154 · Updated 8 years ago
- TopHub AutoTVM log collections ☆69 · Updated 2 years ago
- Optimizing Mobile Deep Learning on ARM GPU with TVM ☆181 · Updated 7 years ago
- Low-precision matrix multiplication ☆1,816 · Updated last year
- A minimal cuDNN deep learning training code sample using LeNet. ☆269 · Updated 2 years ago
- Caffe for Sparse and Low-rank Deep Neural Networks ☆381 · Updated 5 years ago
- Generate a quantization parameter file for int8 inference in the ncnn framework ☆517 · Updated 5 years ago
- Automatic Schedule Exploration and Optimization Framework for Tensor Computations ☆180 · Updated 3 years ago
- Winograd-based convolution implementation in OpenCL ☆28 · Updated 8 years ago
- Deep Compression on AlexNet ☆673 · Updated 3 years ago
- High-performance cross-platform inference engine; Anakin runs on x86 CPU, ARM, NVIDIA GPU, AMD GPU, Bitmain, and Cambricon devices. ☆535 · Updated 3 years ago
- BLISlab: A Sandbox for Optimizing GEMM ☆548 · Updated 4 years ago
- Caffe for Deep Compression ☆239 · Updated 8 years ago
- Partial source code of deepcore v0.7 ☆27 · Updated 5 years ago
- Implementations of the convolution layer in different flavors ☆68 · Updated 8 years ago
- heterogeneity-aware-lowering-and-optimization ☆256 · Updated last year
- Quantized Neural Networks - networks trained for inference at arbitrarily low precision. ☆147 · Updated 8 years ago
- Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1 ☆307 · Updated 4 years ago