intel / light-model-transformer
☆72 · Updated last week
Alternatives and similar repositories for light-model-transformer
Users interested in light-model-transformer are comparing it to the libraries listed below.
- Accelerating DNN Convolutional Layers with Micro-batches ☆63 · Updated 5 years ago
- Intel® Optimization for Chainer*, a Chainer module providing a NumPy-like API and DNN acceleration using MKL-DNN ☆172 · Updated 3 weeks ago
- Library for fast image convolution in neural networks on Intel Architecture ☆31 · Updated 8 years ago
- Symbolic Expression and Statement Module for new DSLs ☆206 · Updated 4 years ago
- Code for testing native float16 matrix multiplication performance on Tesla P100 and V100 GPUs based on cublasHgemm ☆34 · Updated 6 years ago
- High Efficiency Convolution Kernel for Maxwell GPU Architecture ☆135 · Updated 8 years ago
- A simple memory manager for CUDA designed to help Deep Learning frameworks manage memory ☆298 · Updated 6 years ago
- Greentea LibDNN - a universal convolution implementation supporting CUDA and OpenCL ☆137 · Updated 8 years ago
- Conversion to/from half-precision floating point formats ☆371 · Updated last month
- Fork of https://source.codeaurora.org/quic/hexagon_nn/nnlib ☆58 · Updated 2 years ago
- Tutorial to optimize GEMM performance on Android ☆52 · Updated 9 years ago
- TensorFlow and TVM integration ☆36 · Updated 5 years ago
- THIS REPOSITORY HAS MOVED TO github.com/nvidia/cub, WHICH IS AUTOMATICALLY MIRRORED HERE. ☆84 · Updated last year
- CNNs in Halide ☆23 · Updated 9 years ago
- A cuDNN minimal deep learning training code sample using LeNet. ☆268 · Updated 2 years ago
- Optimizing Mobile Deep Learning on ARM GPU with TVM ☆181 · Updated 6 years ago
- Kernel Fusion and Runtime Compilation Based on NNVM ☆71 · Updated 8 years ago
- ☆68 · Updated 3 years ago
- Documentation for the StreamExecutor open source proposal ☆83 · Updated 9 years ago
- Efficient Top-K implementation on the GPU ☆186 · Updated 6 years ago
- Demitasse: SPMD Programming Implementation of Deep Neural Network Library for Mobile Devices (NeurIPS 2016 WS) ☆23 · Updated 8 years ago
- Python bindings for NVTX ☆66 · Updated 2 years ago
- ONNX Parser is a tool that automatically generates OpenVX inference code (CNN) from ONNX binary model files. ☆18 · Updated 6 years ago
- kmeans clustering with multi-GPU capabilities ☆119 · Updated 2 years ago
- Compute Library for Deep Neural Networks (clDNN) ☆576 · Updated 2 years ago
- Optimized half-precision GEMM assembly kernels (deprecated due to ROCm) ☆47 · Updated 8 years ago
- Guide for building custom ops for TensorFlow ☆381 · Updated 2 years ago
- DNN Inference with CPU, C++, ONNX support: Instant ☆56 · Updated 6 years ago
- A prototype implementation of an AllReduce collective communication routine. ☆19 · Updated 7 years ago
- PyTorch C++ API Documentation ☆237 · Updated this week