andravin / wincnn
Winograd minimal convolution algorithm generator for convolutional neural networks.
☆624 · Updated 5 years ago
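wincnn generates the transform matrices for Winograd's minimal filtering algorithms. As a minimal illustration of the idea (a sketch with hardcoded F(2,3) transforms, not code from the repo), the algorithm computes two outputs of a 3-tap 1-D convolution with four multiplications instead of the naive six:

```python
# Winograd minimal filtering F(2,3): two outputs of a 3-tap 1-D
# convolution using 4 multiplications instead of the naive 6.
# Illustrative sketch only; wincnn itself generates such transforms.

def winograd_f23(d, g):
    """d: 4 input samples, g: 3 filter taps -> 2 convolution outputs."""
    d0, d1, d2, d3 = d
    g0, g1, g2 = g
    # Filter transform G g (precomputable once per filter).
    Gg = (g0, (g0 + g1 + g2) / 2, (g0 - g1 + g2) / 2, g2)
    # Input transform B^T d.
    Bd = (d0 - d2, d1 + d2, d2 - d1, d1 - d3)
    # Elementwise products: the only four multiplications.
    m = [a * b for a, b in zip(Bd, Gg)]
    # Output transform A^T m.
    return (m[0] + m[1] + m[2], m[1] - m[2] - m[3])

# Matches direct convolution y[i] = sum_k d[i+k] * g[k]:
print(winograd_f23([1, 2, 3, 4], [1, 1, 1]))  # (6.0, 9.0)
```

For 2-D convolution the same transforms are applied along both axes (F(2x2, 3x3) uses 16 multiplications where the direct method needs 36), which is the regime where these generated algorithms pay off in CNN inference.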
Alternatives and similar repositories for wincnn
Users interested in wincnn are comparing it to the repositories listed below.
- Efficient Sparse-Winograd Convolutional Neural Networks (ICLR 2018) ☆193 · Updated 6 years ago
- Ristretto: Caffe-based approximation of convolutional neural networks ☆289 · Updated 6 years ago
- Fast CUDA kernels for ResNet inference ☆182 · Updated 6 years ago
- Collection of works on reducing model size or on ASIC/FPGA accelerators for machine learning ☆565 · Updated last year
- Caffe implementation of accurate low-precision neural networks ☆118 · Updated 7 years ago
- Caffe implementation of incremental network quantization ☆191 · Updated 7 years ago
- Quantization of convolutional neural networks ☆248 · Updated last year
- tophub autotvm log collections ☆69 · Updated 2 years ago
- BinaryNets in TensorFlow with XNOR GEMM op ☆154 · Updated 8 years ago
- BMXNet: an open-source binary neural network implementation based on MXNet (new version: https://github.com/hpi-xnor/BMXNet-v2) ☆351 · Updated 6 years ago
- An efficient framework for convolutional neural networks ☆278 · Updated 2 years ago
- Optimizing mobile deep learning on ARM GPU with TVM ☆181 · Updated 7 years ago
- Neural network visualizer and analyzer ☆164 · Updated 7 years ago
- TVM integration into PyTorch ☆456 · Updated 5 years ago
- Caffe for sparse and low-rank deep neural networks ☆383 · Updated 5 years ago
- Low-precision matrix multiplication ☆1,821 · Updated last year
- A minimal cuDNN deep learning training code sample using LeNet ☆268 · Updated 2 years ago
- Winograd-based convolution implementation in OpenCL ☆28 · Updated 8 years ago
- Deep Compression on AlexNet ☆673 · Updated 3 years ago
- Partial source code of deepcore v0.7 ☆27 · Updated 5 years ago
- Generate a quantization parameter file for ncnn framework int8 inference ☆518 · Updated 5 years ago
- ☆26 · Updated 9 years ago
- Anakin: high-performance cross-platform inference engine; runs on x86 CPU, ARM, NVIDIA GPU, AMD GPU, Bitmain, and Cambricon devices ☆535 · Updated 3 years ago
- Heterogeneous runtime version of Caffe; adds heterogeneous capabilities to Caffe using a heterogeneous computing infrastructure frame… ☆269 · Updated 7 years ago
- heterogeneity-aware-lowering-and-optimization ☆257 · Updated last year
- Graph transforms to quantize and retrain deep neural nets in TensorFlow ☆168 · Updated 6 years ago
- BLISlab: a sandbox for optimizing GEMM ☆552 · Updated 4 years ago
- TVM tutorial ☆66 · Updated 6 years ago
- Training deep neural networks with binary weights during propagations ☆381 · Updated 9 years ago
- Place for meetup slides ☆140 · Updated 5 years ago