18alantom / concurrent_inference
An example of how to use the multiprocessing package along with PyTorch.
☆21 · Updated 4 years ago
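The repository's own code is not reproduced here, but the pattern its description implies, a pool of worker processes that each hold their own model copy, can be sketched roughly as follows. `load_model`, `infer`, and the doubling "model" are placeholders, not the repo's API; real PyTorch code would load an actual `nn.Module` in the initializer and typically use `torch.multiprocessing`.

```python
import multiprocessing as mp

def load_model():
    # Stand-in for an expensive model load (e.g. constructing a
    # torch.nn.Module and calling .eval()); here just a doubling function.
    return lambda x: x * 2

_model = None

def init_worker():
    # Pool initializer: load the model once per worker process rather than
    # once per input, amortizing the load cost across many requests.
    global _model
    _model = load_model()

def infer(x):
    # Runs inside a worker process, using that process's model copy.
    return _model(x)

def run_inference(inputs, workers=2):
    # "fork" keeps this sketch self-contained on POSIX; PyTorch code
    # usually prefers the "spawn" start method via torch.multiprocessing.
    ctx = mp.get_context("fork")
    with ctx.Pool(processes=workers, initializer=init_worker) as pool:
        return pool.map(infer, inputs)
```

Calling `run_inference([1, 2, 3])` returns `[2, 4, 6]`: each input is dispatched to a worker whose model was loaded exactly once by the initializer.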
Alternatives and similar repositories for concurrent_inference
Users interested in concurrent_inference are comparing it to the repositories listed below.
- Advanced inference pipeline using NVIDIA Triton Inference Server for CRAFT text detection (PyTorch), including a converter from PyTorch -> O… ☆33 · Updated 3 years ago
- NVIDIA Jetson and Deepstream Python Examples ☆30 · Updated 4 years ago
- This repository deploys YOLOv4 as an optimized TensorRT engine to Triton Inference Server ☆285 · Updated 3 years ago
- The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's Python API. ☆135 · Updated 3 weeks ago
- ☆53 · Updated 3 years ago
- Custom gst-nvinfer for alignment in Deepstream ☆27 · Updated 7 months ago
- A simple Python example that uses Deepstream to process a video stream. ☆35 · Updated 4 years ago
- How to deploy open source models using DeepStream and Triton Inference Server ☆80 · Updated last year
- triton server ensemble model demo ☆30 · Updated 3 years ago
- NVIDIA Deepstream 6.1 Python boilerplate ☆139 · Updated last year
- ☆53 · Updated 3 years ago
- By the end of this post, you will learn how to: Train a SOTA YOLOv5 model on your own data. Sparsify the model using SparseML quantizati… ☆55 · Updated 2 years ago
- An example of using DeepStream SDK for redaction ☆209 · Updated last year
- Yolov5 TensorRT Implementations ☆67 · Updated 2 years ago
- This sample shows how to train and deploy a deep learning model for the real time redaction of faces from video streams using the NVIDIA … ☆36 · Updated last year
- Computer Vision deployment tools for dummies and experts. CVU aims at making CV pipelines easier to build and consistent around platform… ☆88 · Updated last year
- YOLOv4 accelerated with TensorRT and multi-stream input using Deepstream ☆36 · Updated 4 years ago
- A project demonstrating how to use nvmetamux to run multiple models in parallel. ☆102 · Updated 8 months ago
- ☆52 · Updated 4 years ago
- Torchserve server using a YoloV5 model running on docker with GPU and static batch inference to perform production ready and real time in… ☆97 · Updated 2 years ago
- Triton Migration Guide for DeepStreamSDK. ☆14 · Updated last year
- Utility scripts for editing or modifying onnx models. Utility scripts to summarize onnx model files along with visualization for loop ope… ☆79 · Updated 3 years ago
- A Python wrapper written in C++11 for past frame tracking result metadata classes in Nvidia's DeepStream framework, providing functionali… ☆31 · Updated 4 years ago
- Deepstream app that uses retinaface and arcface for face recognition. ☆70 · Updated 9 months ago
- Inference of quantization aware trained networks using TensorRT ☆82 · Updated 2 years ago
- A project demonstrating how to make DeepStream docker images. ☆78 · Updated 6 months ago
- Sample app code for LPR deployment on DeepStream ☆222 · Updated 8 months ago
- Exporting YOLOv5 for CPU inference with ONNX and OpenVINO ☆37 · Updated 10 months ago
- This repository utilizes the Triton Inference Server Client, which streamlines the complexity of model deployment. ☆19 · Updated 9 months ago
- This is a face recognition app built on DeepStream reference app. ☆38 · Updated 4 years ago