chrischoy / CUDA-FFT-Convolution
CUDA FFT convolution
☆15 · Updated 10 years ago
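For context, FFT-based convolution computes a convolution as a pointwise product in the frequency domain: forward-transform both operands, multiply the spectra elementwise, then inverse-transform. The sketch below is a minimal illustration of that idea using cuFFT; it is not code from this repository, and it assumes both operands are complex device buffers already zero-padded to a common length n (the function name fftConvolve is made up for this example).

```cuda
#include <cufft.h>
#include <cuda_runtime.h>

// Pointwise complex multiply with scaling (cuFFT's inverse transform is
// unnormalized, so the 1/n factor is folded in here).
__global__ void pointwiseMulAndScale(cufftComplex* a, const cufftComplex* b,
                                     int n, float scale) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        cufftComplex x = a[i], y = b[i];
        cufftComplex r;
        r.x = (x.x * y.x - x.y * y.y) * scale;
        r.y = (x.x * y.y + x.y * y.x) * scale;
        a[i] = r;
    }
}

// d_signal and d_kernel are device buffers of length n (already zero-padded).
// On return, d_signal holds the circular convolution of the two inputs.
void fftConvolve(cufftComplex* d_signal, cufftComplex* d_kernel, int n) {
    cufftHandle plan;
    cufftPlan1d(&plan, n, CUFFT_C2C, 1);

    // Transform both operands to the frequency domain (in place).
    cufftExecC2C(plan, d_signal, d_signal, CUFFT_FORWARD);
    cufftExecC2C(plan, d_kernel, d_kernel, CUFFT_FORWARD);

    // Multiply the spectra elementwise.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    pointwiseMulAndScale<<<blocks, threads>>>(d_signal, d_kernel, n, 1.0f / n);

    // Inverse transform yields the convolution result.
    cufftExecC2C(plan, d_signal, d_signal, CUFFT_INVERSE);
    cufftDestroy(plan);
}
```

A production version would additionally check the cufftResult return codes and could use R2C/C2R transforms for real-valued inputs to roughly halve the work.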
Alternatives and similar repositories for CUDA-FFT-Convolution
Users interested in CUDA-FFT-Convolution are comparing it to the libraries listed below.
- Optimized half-precision GEMM assembly kernels (deprecated due to ROCm) ☆47 · Updated 8 years ago
- Greentea LibDNN - a universal convolution implementation supporting CUDA and OpenCL ☆136 · Updated 8 years ago
- Tutorial to optimize GEMM performance on Android ☆51 · Updated 9 years ago
- Library for fast image convolution in neural networks on Intel Architecture ☆31 · Updated 8 years ago
- A simple memory manager for CUDA designed to help Deep Learning frameworks manage memory ☆297 · Updated 6 years ago
- Some C++ code for computing 1D and 2D convolution products using the FFT, implemented with the GSL or FFTW ☆58 · Updated 12 years ago
- A fast deep neural network library (CPU) for speech recognition ☆84 · Updated 6 years ago
- Collective Knowledge repository for NVIDIA's TensorRT ☆37 · Updated 4 years ago
- k-means clustering with multi-GPU capabilities ☆119 · Updated 2 years ago
- A heterogeneous multi-GPU level-3 BLAS library ☆45 · Updated 5 years ago
- Vector Math Library ☆79 · Updated 2 weeks ago
- Fast matrix multiplication ☆29 · Updated 4 years ago
- High-Performance Tensor Transpose library ☆200 · Updated 2 years ago
- A portable high-level API with CUDA or OpenCL back-end ☆54 · Updated 7 years ago
- Intel® Optimization for Chainer*, a Chainer module providing a NumPy-like API and DNN acceleration using MKL-DNN ☆172 · Updated last week
- CNN accelerated by CUDA; tested on MNIST, finally reaching 99.76% accuracy ☆186 · Updated 7 years ago
- C++ interface for MXNet ☆115 · Updated 8 years ago
- Code for testing native float16 matrix multiplication performance on Tesla P100 and V100 GPUs using cublasHgemm ☆34 · Updated 5 years ago
- Kernel Fusion and Runtime Compilation Based on NNVM ☆70 · Updated 8 years ago
- Code appendix to an OpenCL matrix-multiplication tutorial ☆173 · Updated 8 years ago
- TTC: A high-performance Compiler for Tensor Transpositions ☆20 · Updated 7 years ago
- RDMA Optimization on MXNet ☆14 · Updated 7 years ago
- High Efficiency Convolution Kernel for Maxwell GPU Architecture ☆134 · Updated 8 years ago
- CLTune: An automatic OpenCL & CUDA kernel tuner ☆180 · Updated 2 years ago
- ☆68 · Updated 2 years ago
- ONNX Parser is a tool that automatically generates OpenVX inference code (CNN) from ONNX binary model files ☆18 · Updated 6 years ago
- A C++/CUDA template library for tensor lazy evaluation ☆161 · Updated 2 years ago
- Heterogeneous Run Time version of MXNet; adds heterogeneous capabilities to MXNet using a heterogeneous computing infrastructure frame… ☆72 · Updated 7 years ago
- Test Winograd convolution written in TVM for CUDA and AMDGPU ☆41 · Updated 6 years ago
- Custom fork containing our own Python backend for integration into neon ☆15 · Updated 2 years ago