ravi-teja-mullapudi / Halide-NN
CNNs in Halide
☆23 · Updated 10 years ago
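Halide-NN expresses CNN layers as Halide pipelines. The fragment below is a minimal, illustrative sketch of how a single convolution layer can be written in Halide's C++ API; the buffer layouts, kernel sizes, and schedule are assumptions made for the example and are not taken from the repository.

```cpp
// Minimal sketch of a convolution layer in Halide (C++ embedded DSL).
// All names and sizes are illustrative, not from Halide-NN.
#include "Halide.h"
using namespace Halide;

int main() {
    // Symbolic inputs: (x, y, channel) layout, batch dimension omitted for brevity.
    ImageParam input(Float(32), 3, "input");     // width, height, in_channels
    ImageParam weights(Float(32), 4, "weights"); // kx, ky, in_channels, out_channels

    Var x("x"), y("y"), co("co");
    RDom r(0, 3, 0, 3, 0, 16);  // 3x3 kernel over 16 input channels (assumed sizes)

    Func conv("conv");
    conv(x, y, co) = 0.0f;
    conv(x, y, co) += input(x + r.x, y + r.y, r.z) * weights(r.x, r.y, r.z, co);

    // A simple schedule: parallelize over output channels, vectorize the inner loop.
    conv.parallel(co).vectorize(x, 8);
    conv.update().parallel(co).vectorize(x, 8);

    // With the ImageParams bound to real buffers, the pipeline could be run with:
    // Buffer<float> out = conv.realize({32, 32, 8});
    return 0;
}
```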
Alternatives and similar repositories for Halide-NN
Users interested in Halide-NN are comparing it to the libraries listed below.
- Optimized half-precision GEMM assembly kernels (deprecated due to ROCm) ☆47 · Updated 8 years ago
- Proof-of-concept CNN in Halide ☆22 · Updated 9 years ago
- ☆101 · Updated 6 years ago
- Flexible-GEMM convolution from deepcore ☆17 · Updated 6 years ago
- Greentea LibDNN - a universal convolution implementation supporting CUDA and OpenCL ☆137 · Updated 8 years ago
- Symbolic Expression and Statement Module for new DSLs ☆205 · Updated 5 years ago
- ICML 2017 MEC: Memory-efficient Convolution for Deep Neural Network, unofficial C++ implementation ☆17 · Updated 6 years ago
- Library for fast image convolution in neural networks on Intel Architecture ☆30 · Updated 8 years ago
- A domain-specific language and compiler for image processing ☆77 · Updated 4 years ago
- TVM stack: exploring the incredible explosion of deep-learning frameworks and how to bring them together ☆64 · Updated 7 years ago
- Winograd-based convolution implementation in OpenCL ☆28 · Updated 8 years ago
- A heterogeneous multi-GPU level-3 BLAS library ☆46 · Updated 6 years ago
- Kernel Fusion and Runtime Compilation Based on NNVM ☆72 · Updated 9 years ago
- portDNN, a library implementing neural network algorithms written in SYCL ☆113 · Updated last year
- ☆11 · Updated 5 years ago
- CLTune: An automatic OpenCL & CUDA kernel tuner ☆183 · Updated 3 years ago
- Tutorial on optimizing GEMM performance on Android ☆51 · Updated 9 years ago
- Fork of https://source.codeaurora.org/quic/hexagon_nn/nnlib ☆58 · Updated 2 years ago
- Test of Winograd convolution written in TVM for CUDA and AMDGPU ☆41 · Updated 7 years ago
- Third-party assembler and GEMM library for NVIDIA Kepler GPUs ☆85 · Updated 6 years ago
- This repository has moved to github.com/nvidia/cub, which is automatically mirrored here ☆85 · Updated last year
- CaffePresso: An Optimized Library for Deep Learning on Embedded Accelerator-based platforms ☆87 · Updated last year
- Benchmark of TVM quantized model on CUDA ☆112 · Updated 5 years ago
- Optimizing Mobile Deep Learning on ARM GPU with TVM ☆181 · Updated 7 years ago
- A portable high-level API with CUDA or OpenCL back-end ☆55 · Updated 8 years ago
- Accelerating DNN Convolutional Layers with Micro-batches ☆63 · Updated 5 years ago
- Fast matrix multiplication ☆31 · Updated 4 years ago
- ONNX Parser, a tool that automatically generates OpenVX inference code (CNN) from ONNX binary model files ☆18 · Updated 7 years ago
- Code appendix to an OpenCL matrix-multiplication tutorial (the blocking pattern such tutorials start from is sketched after this list) ☆178 · Updated 8 years ago
- Full-speed Array of Structures access ☆176 · Updated 2 years ago
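Several of the entries above (the half-precision GEMM assembly kernels, the Android GEMM tutorial, the OpenCL matrix-multiplication tutorial, and the fast matrix multiplication library) optimize the same core loop nest. As a point of reference only, and not code from any listed repository, here is the naive-versus-blocked GEMM pattern such tutorials typically start from, in plain C++; the tile size is an arbitrary placeholder.

```cpp
// Generic illustration of the naive vs. blocked GEMM pattern that the listed
// tutorials refine further (vectorization, register tiling, GPU kernels, assembly).
// Not taken from any listed repository; sizes and tile factor are arbitrary.
#include <algorithm>
#include <cstddef>

// C (MxN) += A (MxK) * B (KxN), row-major; caller initializes C.
void gemm_naive(const float* A, const float* B, float* C,
                std::size_t M, std::size_t N, std::size_t K) {
    for (std::size_t i = 0; i < M; ++i)
        for (std::size_t j = 0; j < N; ++j) {
            float acc = C[i * N + j];
            for (std::size_t k = 0; k < K; ++k)
                acc += A[i * K + k] * B[k * N + j];
            C[i * N + j] = acc;
        }
}

// Same computation with simple loop blocking so tiles of A, B, and C stay in cache.
void gemm_blocked(const float* A, const float* B, float* C,
                  std::size_t M, std::size_t N, std::size_t K,
                  std::size_t T = 64) {
    for (std::size_t i0 = 0; i0 < M; i0 += T)
        for (std::size_t k0 = 0; k0 < K; k0 += T)
            for (std::size_t j0 = 0; j0 < N; j0 += T)
                for (std::size_t i = i0; i < std::min(i0 + T, M); ++i)
                    for (std::size_t k = k0; k < std::min(k0 + T, K); ++k) {
                        const float a = A[i * K + k];
                        for (std::size_t j = j0; j < std::min(j0 + T, N); ++j)
                            C[i * N + j] += a * B[k * N + j];
                    }
}
```

Production libraries go well beyond this (packing, register tiling, hand-written assembly, GPU work-group tiling), which is exactly the ground the repositories in this list cover.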