intel / inference-engine-node
Bringing hardware-accelerated deep learning inference to Node.js and Electron.js apps.
☆33 · Updated 3 years ago
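For orientation, below is a minimal sketch of how a binding like this is typically driven from Node.js. The module name matches the repo, but every method used here (`createCore`, `readNetwork`, `loadNetwork`, `createInferRequest`, `startAsync`, the blob accessors) and the model/blob names are assumptions modeled on the general OpenVINO™ Inference Engine flow, not taken from this addon's documentation.

```typescript
// Hypothetical sketch of an OpenVINO-style inference flow from Node.js.
// All method names below are assumptions patterned on the Inference Engine API;
// consult the inference-engine-node README for the actual bindings.
const ie = require('inference-engine-node');

async function run(input: Float32Array): Promise<Float32Array> {
  const core = ie.createCore();                                   // assumed: create the engine core
  const net = await core.readNetwork('model.xml', 'model.bin');   // assumed: load an IR model (placeholder paths)
  const execNet = await core.loadNetwork(net, 'CPU');             // assumed: compile the network for a device
  const request = execNet.createInferRequest();                   // assumed: one request per inference

  // 'input' / 'output' are placeholder blob names; real names are model-specific.
  new Float32Array(request.getBlob('input').wmap()).set(input);   // assumed: writable view of the input blob
  await request.startAsync();                                     // assumed: run inference asynchronously

  return new Float32Array(request.getBlob('output').rmap());      // assumed: readable view of the output blob
}
```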
Alternatives and similar repositories for inference-engine-node
Users interested in inference-engine-node are comparing it to the libraries listed below.
- ☆56 · Updated 5 years ago
- Explore the Capabilities of the TensorRT Platform ☆264 · Updated 4 years ago
- This repository contains the results and code for the MLPerf™ Inference v0.5 benchmark. ☆55 · Updated 5 months ago
- Deep Learning Inference benchmark. Supports OpenVINO™ toolkit, TensorFlow, TensorFlow Lite, ONNX Runtime, OpenCV DNN, MXNet, PyTorch, Apa… ☆34 · Updated last week
- A quick view of high-performance convolutional neural network (CNN) inference engines on mobile devices. ☆151 · Updated 3 years ago
- Optimizing Mobile Deep Learning on ARM GPU with TVM ☆181 · Updated 7 years ago
- Benchmark of TVM quantized model on CUDA ☆112 · Updated 5 years ago
- This repository contains the results and code for the MLPerf™ Inference v1.0 benchmark. ☆32 · Updated 5 months ago
- Issues related to MLPerf® Inference policies, including rules and suggested changes ☆63 · Updated 2 weeks ago
- Code for testing native float16 matrix multiplication performance on Tesla P100 and V100 GPUs based on cublasHgemm ☆35 · Updated 6 years ago
- A scalable inference server for models optimized with OpenVINO™ ☆816 · Updated this week
- Parallel CUDA implementation of non-maximum suppression ☆81 · Updated 5 years ago
- To make it easy to benchmark AI accelerators ☆193 · Updated 3 years ago
- Benchmark for embedded-AI deep learning inference engines, such as NCNN, TNN, MNN, and TensorFlow Lite ☆204 · Updated 4 years ago
- heterogeneity-aware-lowering-and-optimization ☆257 · Updated 2 years ago
- Open deep learning compiler stack for CPU, GPU, and specialized accelerators ☆90 · Updated 5 months ago
- DeepStream 4.x samples to deploy TLT training models ☆85 · Updated 5 years ago
- ☆26 · Updated 3 years ago
- Intel® AI Reference Models: contains Intel optimizations for running deep learning workloads on Intel® Xeon® Scalable processors and Inte… ☆724 · Updated this week
- TVM tutorial ☆66 · Updated 6 years ago
- DeepDetect performance sheet ☆93 · Updated 6 years ago
- Repository for OpenVINO's extra modules ☆161 · Updated last week
- ☆42 · Updated 7 years ago
- OpenVINO™ integration with TensorFlow ☆178 · Updated last year
- This repository contains the results and code for the MLPerf™ Training v0.6 benchmark. ☆42 · Updated 2 years ago
- Benchmarking Neural Network Inference on Mobile Devices ☆384 · Updated 2 years ago
- The framework to generate a Dockerfile, build, test, and deploy a Docker image with the OpenVINO™ toolkit. ☆68 · Updated last week
- Tutorial for Using Custom Layers with OpenVINO (Intel Deep Learning Toolkit) ☆106 · Updated 6 years ago
- Heterogeneous Run Time version of TensorFlow. Adds heterogeneous capabilities to TensorFlow, using a heterogeneous computing infrastruc… ☆36 · Updated 7 years ago
- Inference of quantization-aware trained networks using TensorRT ☆83 · Updated 2 years ago