openvino-dev-samples / decode-infer-on-GPU
This sample shows how to use the oneAPI Video Processing Library (oneVPL) to perform single- and multi-source video decode, preprocessing, and inference with OpenVINO, demonstrating device surface sharing (zero copy).
☆13 · Updated 2 years ago
Alternatives and similar repositories for decode-infer-on-GPU
Users interested in decode-infer-on-GPU are comparing it to the libraries listed below.
- A tool to convert a TensorRT engine/plan to a fake ONNX ☆39 · Updated 2 years ago
- Inference on the TVM runtime using C++ with GPU enabled ☆10 · Updated 7 years ago
- The C++ version of ThunderNet with ncnn ☆14 · Updated 4 years ago
- ☆16 · Updated 3 months ago
- OpenVINO™ optimization for PointPillars* ☆32 · Updated last month
- Sample projects for InferenceHelper, a helper class for deep learning inference frameworks: TensorFlow Lite, TensorRT, OpenCV, ncnn, MNN, … ☆20 · Updated 3 years ago
- Tengine 管子 (Pipe): a helper tool for quickly producing demos ☆13 · Updated 3 years ago
- YoloV8 segmentation on the NPU of the RK3566/68/88 ☆14 · Updated last year
- ☆24 · Updated 2 years ago
- ☆14 · Updated 3 years ago
- Learn TensorRT from scratch 🥰 ☆15 · Updated 8 months ago
- ☆27 · Updated last week
- ffmpeg + cuvid + TensorRT + multi-camera ☆12 · Updated 5 months ago
- Quantize YOLOv5 using pytorch_quantization 🚀🚀🚀 ☆14 · Updated last year
- ☆12 · Updated 2 years ago
- Android implementation of SuperPoint, including NCNN and SNPE versions. It can also run on x64 machines.