intel / inference-engine-node
Bringing hardware-accelerated deep learning inference to Node.js and Electron.js apps.
☆32 · Updated 2 years ago
Alternatives and similar repositories for inference-engine-node
Users interested in inference-engine-node are comparing it to the libraries listed below.
- Heterogeneous Run Time version of TensorFlow. Adds heterogeneous capabilities to TensorFlow using heterogeneous computing infrastruc… ☆36 · Updated 7 years ago
- ☆57 · Updated 4 years ago
- This repository contains the results and code for the MLPerf™ Inference v0.5 benchmark. ☆55 · Updated 2 weeks ago
- NVIDIA DeepStream SDK ☆28 · Updated 6 years ago
- Optimizing Mobile Deep Learning on ARM GPU with TVM ☆181 · Updated 6 years ago
- Explore the Capabilities of the TensorRT Platform ☆264 · Updated 3 years ago
- DeepDetect performance sheet ☆93 · Updated 5 years ago
- OpenVINO ARM64 Notes ☆49 · Updated 5 years ago
- Benchmark of TVM quantized model on CUDA ☆111 · Updated 5 years ago
- TensorFlow-nGraph bridge ☆136 · Updated 4 years ago
- ChatBot: sample for TensorRT inference with a TF model ☆46 · Updated 7 years ago
- A quick view of high-performance convolutional neural network (CNN) inference engines on mobile devices. ☆150 · Updated 3 years ago
- Additions and patches to the Caffe framework for use with the Synopsys DesignWare EV Family of Processors ☆22 · Updated 8 months ago
- ☆42 · Updated 6 years ago
- TensorFlow and TVM integration ☆37 · Updated 5 years ago
- Static analysis framework for analyzing programs written in TVM's Relay IR ☆28 · Updated 5 years ago
- Training Toolbox for Caffe ☆48 · Updated last year
- ☆19 · Updated last year
- The NNEF Tools repository contains tools to generate and consume NNEF documents. ☆226 · Updated last week
- Create a concurrent video analysis pipeline featuring multistream face and human pose detection, vehicle attribute detection, and the abi… ☆52 · Updated 11 months ago
- Repository for OpenVINO's extra modules ☆130 · Updated this week
- Using TensorRT to implement and accelerate YOLO v3. Multi-scale inference and NMS are included. The acceleration ratio reaches 3 compared to the o… ☆43 · Updated 6 years ago
- ONNX Parser is a tool that automatically generates OpenVX inference code (CNN) from ONNX binary model files. ☆18 · Updated 6 years ago
- Accelerating DNN Convolutional Layers with Micro-batches ☆63 · Updated 5 years ago
- Benchmark for embedded-AI deep learning inference engines such as NCNN, TNN, MNN, and TensorFlow Lite. ☆204 · Updated 4 years ago
- CK-Caffe public benchmarking data on Firefly-RK3399 ☆37 · Updated 8 years ago
- npcomp - An aspirational MLIR-based NumPy compiler ☆51 · Updated 4 years ago
- Daquexian's NNAPI Library: ONNX + Android NNAPI ☆350 · Updated 5 years ago
- CNN model inference benchmarks for some popular deep learning frameworks ☆52 · Updated 5 years ago
- An implementation of a trained YOLO neural network used with the TensorRT framework. ☆88 · Updated 8 years ago