itlab-vision / dl-benchmark
Deep Learning Inference benchmark. Supports OpenVINO™ toolkit, TensorFlow, TensorFlow Lite, ONNX Runtime, OpenCV DNN, MXNet, PyTorch, Apache TVM, ncnn, PaddlePaddle, etc.
☆32 · Updated this week
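As a rough illustration of the kind of per-framework inference call such a benchmark times, here is a minimal Python sketch using ONNX Runtime, one of the listed backends. This is not dl-benchmark's own code: the model path, input shape, and timing loop are placeholders chosen for the example.

```python
# Minimal sketch of a timed single-framework inference call, of the kind
# a multi-backend benchmark wraps for each framework (ONNX Runtime shown).
# "model.onnx" and the 1x3x224x224 input are placeholders, not repo files.
import time

import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name
dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)

start = time.perf_counter()
outputs = session.run(None, {input_name: dummy_input})
latency_ms = (time.perf_counter() - start) * 1000
print(f"latency: {latency_ms:.2f} ms")
```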
Alternatives and similar repositories for dl-benchmark
Users interested in dl-benchmark are comparing it to the libraries listed below.
- Repository for OpenVINO's extra modules ☆125 · Updated last week
- Create a concurrent video analysis pipeline featuring multistream face and human pose detection, vehicle attribute detection, and the abi… ☆52 · Updated 10 months ago
- Tutorial for Using Custom Layers with OpenVINO (Intel Deep Learning Toolkit) ☆106 · Updated 5 years ago
- Deep learning based human pose estimation demo in Python using Intel OpenVINO toolkit ☆18 · Updated 3 years ago
- deepstream 4.x samples to deploy TLT training models ☆85 · Updated 5 years ago
- ☆43 · Updated last year
- The framework to generate a Dockerfile, build, test, and deploy a docker image with OpenVINO™ toolkit. ☆66 · Updated last week
- OpenVINO ARM64 Notes ☆49 · Updated 5 years ago
- Openvino environment with docker ☆69 · Updated 4 years ago
- Training Toolbox for Caffe ☆48 · Updated 11 months ago
- AI-related samples made available by the DevTech ProViz team ☆30 · Updated last year
- Official scripts modified by yours truly (@starhopp3r) and @jameshi16 that allow OpenVINO™ to run on Ubuntu 18.04. ☆19 · Updated 3 years ago
- ☆120 · Updated 4 years ago
- Bringing the hardware accelerated deep learning inference to Node.js and Electron.js apps. ☆33 · Updated 2 years ago
- RidgeRun Inference Framework ☆27 · Updated 2 years ago
- How to export PyTorch models with unsupported layers to ONNX and then to Intel OpenVINO ☆27 · Updated 3 months ago
- ☆79 · Updated 3 years ago
- Semi-automated OpenVINO benchmark_app with variable parameters. Users can specify multiple options for any parameters in the benchmark_app… (see the sketch after this list) ☆10 · Updated 3 years ago
- How to deploy open source models using DeepStream and Triton Inference Server ☆79 · Updated 11 months ago
- Learn about the workflow using Intel® Distribution of OpenVINO™ toolkit to accelerate vision, automatic speech recognition, natural langu… ☆301 · Updated 10 months ago
- [4-5 FPS / Core m3 CPU only] [11 FPS / Core i7 CPU only] OpenVINO+DeeplabV3+LattePandaAlpha/LaptopPC. CPU / GPU / NCS. RealTime semantic-… ☆45 · Updated 6 years ago
- OpenVINO Inference Engine Python API sample code - NCS2 ☆34 · Updated 6 years ago
- OpenALPR Plug-in for DeepStream on Jetson ☆29 · Updated 6 years ago
- A GStreamer Deep Learning Inference Framework ☆128 · Updated last year
- Practice git, Travis CI and Intel OpenVINO ☆14 · Updated 3 years ago
- ☆28 · Updated last year
- This sample shows how to use the oneAPI Video Processing Library (oneVPL) to perform a single and multi-source video decode and preproces… ☆13 · Updated last year
- How to run Keras model inference x3 times faster with CPU and Intel OpenVINO ☆34 · Updated 6 years ago
- How to run SSD Mobilenet V2 object detection on Jetson Nano at 20+ FPS ☆33 · Updated 5 years ago
- edge/mobile transformer based Vision DNN inference benchmark ☆16 · Updated 4 months ago
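For the semi-automated benchmark_app entry above, a wrapper of that kind boils down to sweeping benchmark_app options and collecting each run's output. The sketch below is not the linked repository's implementation: it assumes OpenVINO's benchmark_app is on PATH, and the model path and swept -nstreams values are placeholders.

```python
# Hedged sketch of a parameter sweep over OpenVINO's benchmark_app
# (illustrative only, not the linked repository's code). Requires
# benchmark_app on PATH; "model.xml" and the swept values are placeholders.
import subprocess

MODEL = "model.xml"
DEVICE = "CPU"
STREAM_COUNTS = ["1", "2", "4"]  # values to try for the -nstreams option

for nstreams in STREAM_COUNTS:
    cmd = [
        "benchmark_app",
        "-m", MODEL,
        "-d", DEVICE,
        "-nstreams", nstreams,
        "-niter", "100",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    print(f"--- nstreams={nstreams} ---")
    # The last few lines of benchmark_app output hold the latency/throughput summary.
    print("\n".join(result.stdout.splitlines()[-5:]))
```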