openvino-dev-samples / decode-infer-on-GPU
This sample shows how to use the oneAPI Video Processing Library (oneVPL) to perform single- and multi-source video decode, preprocessing, and inference with OpenVINO, demonstrating device surface sharing (zero copy).
☆13 · Updated last year
Alternatives and similar repositories for decode-infer-on-GPU:
Users interested in decode-infer-on-GPU are comparing it to the libraries listed below.
- cpp rotation album: a 3D rotating photo album implemented in C++ with Eigen, reproducing content from GAMES101 ☆12 · Updated 2 years ago
- A tool to convert a TensorRT engine/plan to a fake ONNX ☆38 · Updated 2 years ago
- Quantize yolov5 using pytorch_quantization. 🚀🚀🚀 ☆14 · Updated last year
- DETR with TensorRT: removes auxiliary heads that are unused at inference, further speeds up FP16 deployment, and offers a new fix for the all-zero-output problem after TensorRT conversion. ☆12 · Updated last year
- learn TensorRT from scratch 🥰 ☆13 · Updated 6 months ago
- ☆10 · Updated 8 months ago
- ☆14 · Updated 3 years ago
- ffmpeg+cuvid+tensorrt+multicamera ☆12 · Updated 3 months ago
- Sample projects for InferenceHelper, a Helper Class for Deep Learning Inference Frameworks: TensorFlow Lite, TensorRT, OpenCV, ncnn, MNN,… ☆20 · Updated 3 years ago
- Quantize yolov7 using pytorch_quantization. 🚀🚀🚀 ☆10 · Updated last year
- The repository supports TensorRT and QNN platform inference, 2D obstacle detection with the YOLO series (yolov5-yolo11), semantic segmentation, and so… ☆16 · Updated this week
- Inference on the TVM runtime using C++ with GPU enabled ☆10 · Updated 6 years ago
- The C++ version of ThunderNet with ncnn ☆14 · Updated 4 years ago