18alantom / concurrent_inference
An example of how to use the multiprocessing package along with PyTorch.
☆21 · Updated 4 years ago
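A common way to combine the two is a producer/consumer setup: worker processes share the model weights, pull batches from an input queue, and push predictions to an output queue. Below is a minimal sketch of that pattern with torch.multiprocessing; the `worker` function, queue names, and the toy `Linear` model are illustrative assumptions, not the repository's actual code.

```python
import torch
import torch.multiprocessing as mp


def worker(model, in_queue, out_queue):
    """Pull (index, batch) items from in_queue and push predictions to out_queue."""
    with torch.no_grad():
        while True:
            idx, batch = in_queue.get()
            if batch is None:  # stop sentinel: no more work
                break
            out_queue.put((idx, model(batch).numpy()))


if __name__ == "__main__":
    mp.set_start_method("spawn", force=True)  # safe default; required when CUDA is involved

    model = torch.nn.Linear(8, 2)  # toy stand-in for a real inference model
    model.eval()
    model.share_memory()           # share weights instead of copying them per process

    in_q, out_q = mp.Queue(), mp.Queue()
    workers = [mp.Process(target=worker, args=(model, in_q, out_q)) for _ in range(2)]
    for w in workers:
        w.start()

    n_batches = 4
    for i in range(n_batches):
        in_q.put((i, torch.randn(16, 8)))   # enqueue CPU batches
    for _ in workers:
        in_q.put((None, None))              # one stop sentinel per worker

    results = dict(out_q.get() for _ in range(n_batches))
    for w in workers:
        w.join()
    print({idx: preds.shape for idx, preds in results.items()})
```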
Alternatives and similar repositories for concurrent_inference
Users interested in concurrent_inference are comparing it to the libraries listed below.
- The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's Python API. ☆139 · Updated last week
- This repository deploys YOLOv4 as an optimized TensorRT engine to Triton Inference Server. ☆286 · Updated 3 years ago
- Conversion of PyTorch models into TFLite. ☆398 · Updated 2 years ago
- Computer Vision deployment tools for dummies and experts. CVU aims at making CV pipelines easier to build and consistent around platform… ☆90 · Updated 2 years ago
- A simple Python example that uses DeepStream to process a video stream. ☆36 · Updated 4 years ago
- TorchServe server using a YOLOv5 model running on Docker with GPU and static batch inference to perform production-ready and real-time in… ☆100 · Updated 2 years ago
- ☆53 · Updated 3 years ago
- Sample app code for LPR deployment on DeepStream. ☆227 · Updated last year
- How to deploy ONNX models using DeepStream on Jetson Nano. ☆101 · Updated 4 years ago
- How to deploy open source models using DeepStream and Triton Inference Server. ☆86 · Updated last year
- YOLOv5 TensorRT implementations. ☆68 · Updated 3 years ago
- An example of using the DeepStream SDK for redaction. ☆211 · Updated last year
- Face Recognition on NVIDIA Jetson (Nano) using TensorRT. ☆212 · Updated last year
- NVIDIA DeepStream 6.1 Python boilerplate. ☆146 · Updated 2 years ago
- ☆53 · Updated 3 years ago
- Face Mask Detection using the NVIDIA Transfer Learning Toolkit (TLT) and DeepStream for COVID-19. ☆249 · Updated last year
- This script converts an ONNX/OpenVINO IR model to TensorFlow's saved_model, tflite, h5, tfjs, tftrt (TensorRT), CoreML, EdgeTPU, ONNX and… ☆344 · Updated 3 years ago
- This repository utilizes the Triton Inference Server client, which simplifies model deployment. ☆21 · Updated last year
- YOLOv4 accelerated with TensorRT and multi-stream input using DeepStream. ☆38 · Updated 4 years ago
- A DeepStream sample application demonstrating end-to-end retail video analytics for brick-and-mortar retail. ☆52 · Updated 3 years ago
- A repo with a Triton Server deployment template. ☆24 · Updated last year
- A Python wrapper written in C++11 for past-frame tracking result metadata classes in NVIDIA's DeepStream framework, providing functionali… ☆31 · Updated 5 years ago
- YOLOv4 implemented in TensorFlow 2.0. Converts YOLOv4 .weights to .pb and .tflite formats for TensorFlow and TensorFlow Lite. ☆62 · Updated 4 years ago
- A shared library of on-demand DeepStream Pipeline Services for Python and C/C++. ☆330 · Updated 9 months ago
- Sample app code for deploying TAO Toolkit-trained models to Triton. ☆90 · Updated last year
- Social Distancing Detector using deep learning, capable of running on edge AI devices such as NVIDIA Jetson, Google Coral, and more. ☆142 · Updated 2 years ago
- DeepStream app using RetinaFace and ArcFace for face recognition. ☆77 · Updated last year
- A project demonstrating how to use nvmetamux to run multiple models in parallel. ☆113 · Updated last year
- Computer Vision via DeepStream (NVIDIA). ☆16 · Updated 4 years ago
- This repository serves as an example of deploying YOLO models on Triton Server for performance and testing purposes; see the client sketch after this list. ☆69 · Updated 2 months ago
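Several of the entries above put detection models behind Triton Inference Server. For orientation, here is a minimal client-side sketch using the official tritonclient HTTP API; the model name `yolo`, the tensor names `images`/`output0`, and the 640×640 input shape are assumptions for illustration and must match the deployed model's config.pbtxt.

```python
import numpy as np
import tritonclient.http as httpclient

# Assumed deployment details (must match the model's config.pbtxt):
MODEL_NAME = "yolo"       # hypothetical model name
INPUT_NAME = "images"     # hypothetical input tensor name
OUTPUT_NAME = "output0"   # hypothetical output tensor name

client = httpclient.InferenceServerClient(url="localhost:8000")

# Dummy pre-processed batch: 1 image, 3 channels, 640x640, float32.
batch = np.random.rand(1, 3, 640, 640).astype(np.float32)

infer_input = httpclient.InferInput(INPUT_NAME, list(batch.shape), "FP32")
infer_input.set_data_from_numpy(batch)
requested_output = httpclient.InferRequestedOutput(OUTPUT_NAME)

response = client.infer(
    model_name=MODEL_NAME,
    inputs=[infer_input],
    outputs=[requested_output],
)
detections = response.as_numpy(OUTPUT_NAME)
print(detections.shape)
```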