intel / light-model-transformer
☆71 · Updated 7 months ago
Alternatives and similar repositories for light-model-transformer
Users interested in light-model-transformer are comparing it to the libraries listed below.
- Accelerating DNN Convolutional Layers with Micro-batches ☆63 · Updated 5 years ago
- Library for fast image convolution in neural networks on Intel Architecture ☆29 · Updated 7 years ago
- High Efficiency Convolution Kernel for Maxwell GPU Architecture ☆134 · Updated 8 years ago
- Instant: DNN inference with CPU, C++, and ONNX support ☆56 · Updated 6 years ago
- Intel® Optimization for Chainer*, a Chainer module providing a NumPy-like API and DNN acceleration using MKL-DNN ☆170 · Updated 3 weeks ago
- Code for testing native float16 matrix multiplication performance on Tesla P100 and V100 GPUs, based on cublasHgemm ☆34 · Updated 5 years ago
- Symbolic Expression and Statement Module for new DSLs ☆205 · Updated 4 years ago
- Tutorial on optimizing GEMM performance on Android ☆51 · Updated 9 years ago
- A prototype implementation of an AllReduce collective communication routine ☆19 · Updated 6 years ago
- A simple memory manager for CUDA, designed to help deep learning frameworks manage memory ☆296 · Updated 6 years ago
- Greentea LibDNN: a universal convolution implementation supporting CUDA and OpenCL ☆135 · Updated 8 years ago
- Kernel Fusion and Runtime Compilation Based on NNVM ☆70 · Updated 8 years ago
- Python bindings for NVTX ☆66 · Updated last year
- Chainer x TensorRT ☆34 · Updated 6 years ago
- Menoh: fast DNN inference library with multiple programming language support ☆281 · Updated 4 years ago
- Documentation for the StreamExecutor open source proposal ☆83 · Updated 9 years ago
- TensorFlow and TVM integration ☆37 · Updated 5 years ago
- int8_t and int16_t matrix multiply based on https://arxiv.org/abs/1705.01991 ☆71 · Updated last year
- CNNs in Halide ☆23 · Updated 9 years ago
- nGraph™ Backend for ONNX ☆42 · Updated 2 years ago
- Add-on package for ONNX format support in Chainer ☆86 · Updated 5 years ago
- Demitasse: SPMD Programming Implementation of Deep Neural Network Library for Mobile Devices (NeurIPS 2016 WS) ☆23 · Updated 8 years ago
- Efficient Top-K implementation on the GPU ☆179 · Updated 6 years ago
- Optimized half-precision GEMM assembly kernels (deprecated in favor of ROCm) ☆47 · Updated 7 years ago
- Fork of https://source.codeaurora.org/quic/hexagon_nn/nnlib ☆57 · Updated 2 years ago
- Efficient forward propagation for BCNNs ☆50 · Updated 7 years ago
- Conversion to/from half-precision floating point formats (see the sketch after this list) ☆354 · Updated 10 months ago
- Intel® Optimization for Chainer* ☆82 · Updated 2 years ago
- Test of Winograd convolution written in TVM for CUDA and AMDGPU ☆41 · Updated 6 years ago
- Optimizing Mobile Deep Learning on ARM GPU with TVM ☆181 · Updated 6 years ago
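
Several of the entries above center on reduced-precision arithmetic: the half-precision conversion library, the cublasHgemm float16 benchmark, and the int8_t/int16_t GEMM. To make the conversion entry concrete, here is a minimal sketch of a float32 → binary16 bit conversion in C++. This is not code from any listed repository: the function name `float_to_half` is invented for illustration, and subnormal results are flushed to zero for brevity, whereas a full conversion library also encodes them.

```cpp
#include <cstdint>
#include <cstdio>
#include <cstring>

// Minimal float32 -> binary16 bit conversion (illustrative sketch).
// Handles normals, Inf, and NaN with round-to-nearest-even; values too
// small for a normal half are flushed to signed zero for brevity
// (a full library also encodes subnormal halves).
static uint16_t float_to_half(float f) {
    uint32_t x;
    std::memcpy(&x, &f, sizeof x);               // type-pun without UB

    uint16_t sign = (uint16_t)((x >> 16) & 0x8000u);
    uint32_t e    = (x >> 23) & 0xFFu;           // biased float32 exponent
    uint32_t m    = x & 0x7FFFFFu;               // 23-bit mantissa

    if (e == 0xFFu)                              // Inf or NaN
        return (uint16_t)(sign | 0x7C00u | (m ? 0x0200u : 0u));

    int32_t eh = (int32_t)e - 127 + 15;          // rebias exponent for half
    if (eh >= 0x1F) return (uint16_t)(sign | 0x7C00u);  // overflow -> Inf
    if (eh <= 0)    return sign;                 // underflow -> signed zero

    uint16_t h   = (uint16_t)(sign | (eh << 10) | (m >> 13));
    uint32_t rest = m & 0x1FFFu;                 // the 13 bits dropped above
    if (rest > 0x1000u || (rest == 0x1000u && (h & 1u)))
        h++;  // round to nearest even; a carry into the exponent is correct
    return h;
}

int main() {
    printf("1.0f    -> 0x%04X\n", float_to_half(1.0f));     // 0x3C00
    printf("-2.5f   -> 0x%04X\n", float_to_half(-2.5f));    // 0xC100
    printf("65504.0 -> 0x%04X\n", float_to_half(65504.0f)); // max half, 0x7BFF
}
```

The same bit-level pattern run in reverse gives the half → float direction; round-to-nearest-even is used here because it is the IEEE 754 default rounding mode.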