levipereira / deepstream-yolo-triton-server-rtsp-out
The purpose of this repository is to provide a DeepStream/Triton Server sample application that uses the YOLOv7, YOLOv7-QAT, and YOLOv9 models to perform inference on video files or RTSP streams.
☆10 · Updated last year
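To illustrate the kind of pipeline such an application runs, here is a hedged sketch that assembles a gst-launch-style DeepStream string: RTSP in, Triton-backed inference through `nvinferserver`, on-screen display, and H.264 re-encoding for RTSP out. The element names follow the stock DeepStream reference apps rather than this repository's actual code; the stream URL and config-file path are placeholders, and the string is only assembled here, not launched.

```python
def build_pipeline(rtsp_in: str, infer_config: str) -> str:
    """Assemble a gst-launch-style DeepStream pipeline string that pulls an
    RTSP source, runs Triton-backed inference, draws the detections, and
    re-encodes the annotated video for an RTSP server to serve."""
    elements = [
        f"rtspsrc location={rtsp_in}",                 # pull the source stream
        "rtph264depay ! h264parse ! nvv4l2decoder",    # depayload + GPU decode
        "m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720",
        f"nvinferserver config-file-path={infer_config}",  # inference via Triton
        "nvvideoconvert ! nvdsosd",                    # draw bounding boxes
        "nvvideoconvert ! nvv4l2h264enc",              # re-encode annotated frames
        "rtph264pay name=pay0 pt=96",                  # RTP payload for RTSP out
    ]
    return " ! ".join(elements)


pipeline = build_pipeline("rtsp://camera.local/stream", "config_triton.txt")
print(pipeline)
```

Swapping `nvinferserver` for `nvinfer` would run the same layout with DeepStream's built-in TensorRT backend instead of Triton.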
Alternatives and similar repositories for deepstream-yolo-triton-server-rtsp-out:
Users interested in deepstream-yolo-triton-server-rtsp-out are comparing it to the repositories listed below.
- ☆22 · Updated 2 years ago
- A project demonstrating how to use nvmetamux to run multiple models in parallel. ☆99 · Updated 6 months ago
- NVIDIA DeepStream SDK 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 implementation for YOLO-Segmentation models. ☆60 · Updated last year
- ☆16 · Updated 3 years ago
- ☆53 · Updated 3 years ago
- Provides an ensemble model to deploy a YOLOv8 ONNX model to Triton. ☆36 · Updated last year
- Custom gst-nvinfer for alignment in DeepStream. ☆26 · Updated 5 months ago
- This repository serves as an example of deploying YOLO models on Triton Server for performance and testing purposes. ☆61 · Updated 11 months ago
- ☆15 · Updated last year
- C++ application to perform computer vision tasks using Nvidia Triton Server for model inference. ☆23 · Updated last week
- ☆53 · Updated 3 years ago
- A DeepStream sample application demonstrating end-to-end retail video analytics for brick-and-mortar retail. ☆46 · Updated 2 years ago
- YOLOv5 object detection on Triton Inference Server. ☆15 · Updated 2 years ago
- Implementation of end-to-end YOLO models for DeepStream. ☆50 · Updated 5 months ago
- This repository provides a YOLOv5 GPU optimization sample. ☆102 · Updated 2 years ago
- Uses the DeepStream Python API to extract the model output tensor and customize the post-processing of YOLO-Pose. ☆62 · Updated last year
- A project demonstrating how to build DeepStream Docker images. ☆75 · Updated 4 months ago
- DeepStream face detection & recognition. ☆23 · Updated 2 years ago
- Learning various topics by following the Tensorrt_pro project. ☆39 · Updated 2 years ago
- Triton Server ensemble model demo. ☆30 · Updated 2 years ago
- This repository provides an optical character detection and recognition solution optimized for Nvidia devices. ☆74 · Updated 2 weeks ago
- Notes on understanding the tensorRT_Pro open-source project. ☆20 · Updated 2 years ago
- ☆14 · Updated last year
- Implementation of YOLOv9 QAT optimized for deployment on TensorRT platforms. ☆107 · Updated 2 months ago
- This repository uses the Triton Inference Server Client, which streamlines model deployment.
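Several of the repositories above deploy YOLO models behind Triton, and a client then has to prepare frames before each inference request. Below is a hedged sketch of the conventional YOLO preprocessing (letterbox to a square canvas, scale to [0, 1], HWC to NCHW); the 640×640 input size and the grey padding value 114 are common YOLO-export conventions, not details confirmed by any repository listed here, and a real client would resize with OpenCV rather than the dependency-free index trick used below.

```python
import numpy as np

def preprocess(frame: np.ndarray, size: int = 640) -> np.ndarray:
    """Letterbox an HxWx3 uint8 frame to size x size, scale to [0, 1],
    and reorder to the 1x3xHxW float32 layout most YOLO ONNX exports expect."""
    h, w = frame.shape[:2]
    scale = size / max(h, w)
    new_h, new_w = int(round(h * scale)), int(round(w * scale))
    # Nearest-neighbour resize via integer index maps (avoids a cv2 dependency).
    rows = (np.arange(new_h) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(new_w) / scale).astype(int).clip(0, w - 1)
    resized = frame[rows][:, cols]
    # Pad the short side with the conventional grey value 114.
    canvas = np.full((size, size, 3), 114, dtype=np.uint8)
    canvas[:new_h, :new_w] = resized
    tensor = canvas.astype(np.float32) / 255.0      # scale to [0, 1]
    return tensor.transpose(2, 0, 1)[np.newaxis]    # HWC -> 1x3xHxW
```

The resulting array is what would be wrapped in a Triton `InferInput` and sent to the server; the inverse of `scale` is kept around in real clients to map detected boxes back to original-frame coordinates.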