intel / inference-engine-node
Bringing hardware-accelerated deep learning inference to Node.js and Electron.js apps.
☆ 33 · Updated last year
Related projects
Alternatives and complementary repositories for inference-engine-node
- This repository contains the results and code for the MLPerf™ Inference v0.5 benchmark. ☆ 55 · Updated last year
- ☆ 58 · Updated 4 years ago
- TVM stack: exploring the incredible explosion of deep-learning frameworks and how to bring them together ☆ 64 · Updated 6 years ago
- Optimizing Mobile Deep Learning on ARM GPU with TVM ☆ 179 · Updated 6 years ago
- ☆ 44 · Updated 4 months ago
- Collection of CUDA benchmarks, with a focus on unified vs. explicit memory management. ☆ 20 · Updated 5 years ago
- Benchmark of TVM quantized model on CUDA ☆ 112 · Updated 4 years ago
- ONNX Parser is a tool that automatically generates OpenVX inference code (CNN) from ONNX binary model files. ☆ 17 · Updated 5 years ago
- Experiments evaluating preemption on the NVIDIA Pascal architecture ☆ 18 · Updated 8 years ago
- A quick view of high-performance convolutional neural network (CNN) inference engines on mobile devices. ☆ 150 · Updated 2 years ago
- CNN model inference benchmarks for some popular deep learning frameworks ☆ 52 · Updated 5 years ago
- ☆ 67 · Updated last year
- Code for testing native float16 matrix-multiplication performance on Tesla P100 and V100 GPUs, based on cublasHgemm ☆ 34 · Updated 5 years ago
- This repository contains the results and code for the MLPerf™ Training v0.6 benchmark. ☆ 42 · Updated last year
- Benchmark scripts for TVM ☆ 73 · Updated 2 years ago
- Accelerating DNN Convolutional Layers with Micro-batches ☆ 64 · Updated 4 years ago
- Graph Transforms to Quantize and Retrain Deep Neural Nets in TensorFlow ☆ 168 · Updated 4 years ago
- OpenVX API and Extension Registry. ☆ 45 · Updated 2 weeks ago
- To make it easy to benchmark AI accelerators ☆ 179 · Updated last year
- TensorFlow and TVM integration ☆ 38 · Updated 4 years ago
- This repository contains the results and code for the MLPerf™ Inference v1.0 benchmark. ☆ 30 · Updated last year
- Benchmarks for embedded-AI deep learning inference engines, such as NCNN, TNN, MNN, TensorFlow Lite, etc. ☆ 203 · Updated 3 years ago
- ☆ 26 · Updated 7 years ago
- Static analysis framework for analyzing programs written in TVM's Relay IR. ☆ 27 · Updated 5 years ago
- MIVisionX toolkit is a set of comprehensive computer vision and machine intelligence libraries, utilities, and applications bundled into … ☆ 186 · Updated this week
- Open Enclave port of the ONNX Runtime for confidential inferencing on Azure Confidential Computing ☆ 34 · Updated last year
- npcomp: an aspirational MLIR-based NumPy compiler ☆ 51 · Updated 4 years ago
- tophub autotvm log collections ☆ 70 · Updated last year