intel / light-model-transformer
☆72 · Updated last month
Alternatives and similar repositories for light-model-transformer
Users interested in light-model-transformer are comparing it to the libraries listed below.
- Accelerating DNN Convolutional Layers with Micro-batches ☆63 · Updated 5 years ago
- Intel® Optimization for Chainer*, a Chainer module providing a NumPy-like API and DNN acceleration using MKL-DNN. ☆172 · Updated last week
- High Efficiency Convolution Kernel for Maxwell GPU Architecture ☆137 · Updated 8 years ago
- Code for testing native float16 matrix multiplication performance on Tesla P100 and V100 GPUs using cublasHgemm (see the FP16 GEMM sketch after this list) ☆35 · Updated 6 years ago
- Greentea LibDNN - a universal convolution implementation supporting CUDA and OpenCL ☆137 · Updated 8 years ago
- Library for fast image convolution in neural networks on Intel Architecture ☆31 · Updated 8 years ago
- A simple memory manager for CUDA designed to help deep learning frameworks manage memory ☆298 · Updated 6 years ago
- Symbolic Expression and Statement Module for new DSLs ☆205 · Updated 5 years ago
- CNNs in Halide ☆23 · Updated 10 years ago
- Fork of https://source.codeaurora.org/quic/hexagon_nn/nnlib ☆58 · Updated 2 years ago
- Instant: DNN inference on CPU in C++ with ONNX support ☆56 · Updated 7 years ago
- Optimized half-precision GEMM assembly kernels (deprecated due to ROCm) ☆47 · Updated 8 years ago
- A minimal cuDNN deep learning training code sample using LeNet ☆269 · Updated 2 years ago
- Open single- and half-precision GEMM implementations ☆392 · Updated 2 years ago
- ☆68 · Updated 3 years ago
- CNN accelerated by CUDA; tested on MNIST, finally reaching 99.76% accuracy ☆185 · Updated 8 years ago
- Kernel Fusion and Runtime Compilation Based on NNVM ☆71 · Updated 8 years ago
- k-means clustering with multi-GPU capabilities ☆119 · Updated 2 years ago
- Conversion to/from half-precision floating-point formats ☆371 · Updated 2 months ago
- Tutorial on optimizing GEMM performance on Android ☆51 · Updated 9 years ago
- A prototype implementation of the AllReduce collective communication routine ☆19 · Updated 7 years ago
- Caffe for Sparse Convolutional Neural Networks ☆237 · Updated 2 years ago
- Optimizing Mobile Deep Learning on ARM GPU with TVM ☆181 · Updated 7 years ago
- This repository has moved to github.com/nvidia/cub and is automatically mirrored here ☆84 · Updated last year
- Winograd minimal convolution algorithm generator for convolutional neural networks (a worked F(2,3) example follows this list) ☆622 · Updated 5 years ago
- Efficient top-k implementation on the GPU ☆188 · Updated 6 years ago
- flexible-gemm convolution from deepcore ☆17 · Updated 5 years ago
- Chainer x TensorRT ☆34 · Updated 6 years ago
- Python bindings for NVTX ☆66 · Updated 2 years ago
- Demitasse: SPMD Programming Implementation of a Deep Neural Network Library for Mobile Devices (NeurIPS 2016 WS) ☆23 · Updated 8 years ago
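
The cublasHgemm benchmark entry above measures raw FP16 GEMM throughput on P100/V100-class GPUs. The sketch below shows a minimal timing run of that kind, assuming nvcc and cuBLAS are available; the matrix size, iteration count, and output format are illustrative choices, not values taken from that repository.

```cpp
// Minimal FP16 GEMM timing sketch with cublasHgemm (compile with nvcc, link with -lcublas).
#include <cstdio>
#include <cuda_fp16.h>
#include <cublas_v2.h>

int main() {
    const int n = 4096;       // square matrices; size is an arbitrary choice
    const int iters = 10;     // timed iterations; also arbitrary
    const size_t bytes = size_t(n) * n * sizeof(__half);

    __half *dA, *dB, *dC;
    cudaMalloc(&dA, bytes);
    cudaMalloc(&dB, bytes);
    cudaMalloc(&dC, bytes);
    cudaMemset(dA, 0, bytes);  // operand values do not affect timing
    cudaMemset(dB, 0, bytes);

    cublasHandle_t handle;
    cublasCreate(&handle);
    const __half alpha = __float2half(1.0f);
    const __half beta  = __float2half(0.0f);

    // Warm-up so the first timed call does not pay one-time setup costs.
    cublasHgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                &alpha, dA, n, dB, n, &beta, dC, n);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start);
    for (int i = 0; i < iters; ++i) {
        cublasHgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                    &alpha, dA, n, dB, n, &beta, dC, n);
    }
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    const double tflops = 2.0 * n * n * n * iters / (ms * 1e-3) / 1e12;
    printf("cublasHgemm %d x %d x %d: %.3f ms/iter, %.2f TFLOP/s\n",
           n, n, n, ms / iters, tflops);

    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}
```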
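
The Winograd generator entry above emits minimal-filtering convolution algorithms such as F(2,3), which produces two outputs of a 3-tap filter with four multiplications instead of six. As a self-contained illustration (not code from that repository), the sketch below applies the standard 1-D F(2,3) transforms and checks the result against direct correlation.

```cpp
// 1-D Winograd F(2,3): two outputs of a 3-tap correlation from four inputs,
// using 4 element-wise multiplications instead of 6.
#include <cstdio>

// Direct correlation: y[i] = sum_k d[i + k] * g[k], for i = 0, 1.
static void direct_f23(const float d[4], const float g[3], float y[2]) {
    y[0] = d[0] * g[0] + d[1] * g[1] + d[2] * g[2];
    y[1] = d[1] * g[0] + d[2] * g[1] + d[3] * g[2];
}

// Winograd F(2,3): transform filter and data, multiply element-wise,
// then apply the inverse transform.
static void winograd_f23(const float d[4], const float g[3], float y[2]) {
    // Filter transform G*g (a generator would precompute this once per filter).
    const float u0 = g[0];
    const float u1 = 0.5f * (g[0] + g[1] + g[2]);
    const float u2 = 0.5f * (g[0] - g[1] + g[2]);
    const float u3 = g[2];
    // Data transform B^T*d.
    const float v0 = d[0] - d[2];
    const float v1 = d[1] + d[2];
    const float v2 = d[2] - d[1];
    const float v3 = d[1] - d[3];
    // The four multiplications.
    const float m0 = u0 * v0, m1 = u1 * v1, m2 = u2 * v2, m3 = u3 * v3;
    // Inverse transform A^T*m.
    y[0] = m0 + m1 + m2;
    y[1] = m1 - m2 - m3;
}

int main() {
    const float d[4] = {1.0f, 2.0f, 3.0f, 4.0f};  // arbitrary input tile
    const float g[3] = {0.5f, -1.0f, 0.25f};      // arbitrary 3-tap filter
    float yd[2], yw[2];
    direct_f23(d, g, yd);
    winograd_f23(d, g, yw);
    printf("direct:   %f %f\n", yd[0], yd[1]);   // both print -0.750000 -1.000000
    printf("winograd: %f %f\n", yw[0], yw[1]);
    return 0;
}
```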