intel / light-model-transformer
☆71 · Updated 5 months ago
Alternatives and similar repositories for light-model-transformer:
Users interested in light-model-transformer are comparing it to the libraries listed below.
- Library for fast image convolution in neural networks on Intel Architecture ☆29 · Updated 7 years ago
- Accelerating DNN Convolutional Layers with Micro-batches ☆63 · Updated 4 years ago
- DNN inference with CPU, C++, and ONNX support: Instant ☆56 · Updated 6 years ago
- Symbolic Expression and Statement Module for new DSLs ☆205 · Updated 4 years ago
- Intel® Optimization for Chainer*, a Chainer module providing a NumPy-like API and DNN acceleration using MKL-DNN ☆170 · Updated last month
- High-Efficiency Convolution Kernel for Maxwell GPU Architecture ☆134 · Updated 7 years ago
- Code for testing native float16 matrix multiplication performance on Tesla P100 and V100 GPUs using cublasHgemm ☆34 · Updated 5 years ago
- Tutorial on optimizing GEMM performance on Android ☆51 · Updated 9 years ago
- Greentea LibDNN, a universal convolution implementation supporting CUDA and OpenCL ☆135 · Updated 8 years ago
- A prototype implementation of an AllReduce collective communication routine ☆19 · Updated 6 years ago
- TensorFlow and TVM integration ☆37 · Updated 4 years ago
- Fork of https://source.codeaurora.org/quic/hexagon_nn/nnlib ☆57 · Updated 2 years ago
- Chainer x TensorRT ☆34 · Updated 6 years ago
- Documentation for the StreamExecutor open-source proposal ☆83 · Updated 9 years ago
- A simple memory manager for CUDA, designed to help deep learning frameworks manage memory ☆297 · Updated 6 years ago
- Fast log and exp functions for AVX2/AVX-512 ☆227 · Updated last month
- This repository has moved to github.com/nvidia/cub and is automatically mirrored here ☆84 · Updated last year
- How to design a CPU GEMM on x86 with 256-bit AVX that can beat OpenBLAS ☆70 · Updated 6 years ago
- Kernel Fusion and Runtime Compilation Based on NNVM ☆70 · Updated 8 years ago
- Optimized half-precision GEMM assembly kernels (deprecated due to ROCm) ☆47 · Updated 7 years ago
- A Neural Network Toolkit ☆174 · Updated 5 years ago
- Test Winograd convolution written in TVM for CUDA and AMDGPU ☆41 · Updated 6 years ago
- ☆30 · Updated 7 years ago
- ONNX Parser, a tool that automatically generates OpenVX inference code (CNN) from ONNX binary model files ☆18 · Updated 6 years ago
- Benchmark of TVM quantized models on CUDA ☆111 · Updated 4 years ago
- Optimizing Mobile Deep Learning on ARM GPU with TVM ☆181 · Updated 6 years ago
- Demitasse: SPMD Programming Implementation of Deep Neural Network Library for Mobile Devices (NeurIPS 2016 WS) ☆23 · Updated 8 years ago
- Efficient Top-K implementation on the GPU ☆176 · Updated 6 years ago
- TVM integration into PyTorch ☆452 · Updated 5 years ago
- Menoh: fast DNN inference library with multiple programming language support ☆281 · Updated 4 years ago