Bobo-y / triton_ensemble_model_demo
triton server ensemble model demo
☆30 Updated 3 years ago
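For context, a Triton ensemble model wires several models (for example preprocessing, inference, and postprocessing) into a single pipeline that clients call like any other model. Below is a minimal, hedged sketch of calling such an ensemble with the tritonclient Python package; the server address, the model name `ensemble_model`, and the tensor names, shape, and dtype are illustrative assumptions, not taken from this repository.

```python
# Minimal sketch: querying a Triton ensemble model over HTTP.
# Assumptions (not from the repo): server at localhost:8000, model named
# "ensemble_model" with one FP32 input "INPUT_0" of shape [1, 3, 224, 224]
# and one output "OUTPUT_0".
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Build the input tensor the ensemble's first step expects.
image = np.random.rand(1, 3, 224, 224).astype(np.float32)
infer_input = httpclient.InferInput("INPUT_0", list(image.shape), "FP32")
infer_input.set_data_from_numpy(image)

# Request only the ensemble's final output; intermediate tensors stay server-side.
infer_output = httpclient.InferRequestedOutput("OUTPUT_0")

result = client.infer(
    model_name="ensemble_model",
    inputs=[infer_input],
    outputs=[infer_output],
)
print(result.as_numpy("OUTPUT_0").shape)
```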
Alternatives and similar repositories for triton_ensemble_model_demo
Users interested in triton_ensemble_model_demo are comparing it to the repositories listed below
- ☆53 Updated 3 years ago
- ☆53 Updated 3 years ago
- RetinaFace gets 80.99% on the WiderFace hard val set using MobileNet0.25. ☆24 Updated 5 years ago
- Advanced inference pipeline using NVIDIA Triton Inference Server for CRAFT text detection (PyTorch); includes a converter from Pytorch -> O… ☆33 Updated 3 years ago
- Compare multiple optimization methods on Triton to improve model service performance ☆50 Updated last year
- TensorRT YOLOv7 without ONNX parser ☆24 Updated 2 years ago
- Convert a YOLOv3 model into a TensorRT model that supports dynamic-batch TensorRT inference and deployment on Triton Inference Server ☆28 Updated 4 years ago
- ☆16 Updated 3 years ago
- Using TensorRT for Inference Model Deployment. ☆48 Updated last year
- A multi-object tracking library based on TensorRT ☆54 Updated 3 years ago
- Deploy the YOLOX algorithm using DeepStream ☆89 Updated 3 years ago
- ☆24 Updated 4 years ago
- yolov5s_ncnn_inference pipeline ☆21 Updated 4 years ago
- ☆63 Updated 4 years ago
- Face Recognition with RetinaFace and ArcFace. ☆82 Updated 3 years ago
- SCRFD accelerated with onnxruntime-gpu and TensorRT ☆23 Updated 3 years ago
- ☆22 Updated 2 years ago
- ☆24 Updated 4 years ago
- TensorRT plugin for DCNv2 layer in ONNX model ☆60 Updated 4 years ago
- Implement YOLOv5 with the TensorRT C++ API and integrate batchedNMSPlugin. A Python wrapper is also provided. ☆49 Updated 3 years ago
- YOLO v5 Object Detection on Triton Inference Server ☆15 Updated 2 years ago
- ☆79 Updated 3 years ago
- Implementation for the paper 'YOLO-ReT: Towards High Accuracy Real-time Object Detection on Edge GPUs' ☆96 Updated last year
- A project demonstrating how to use nvmetamux to run multiple models in parallel. ☆101 Updated 6 months ago
- C++ application to perform computer vision tasks using Nvidia Triton Server for model inference ☆23 Updated 3 weeks ago
- TensorRT version of the DBNet network for natural scene text detection ☆22 Updated 4 years ago
- MagFace on Triton Inference Server using TensorRT ☆16 Updated 3 years ago
- YOLOv5 Quantization Aware Training with TensorRT ☆15 Updated 2 years ago
- ☆15 Updated last year
- How to deploy open source models using DeepStream and Triton Inference Server ☆79 Updated 10 months ago