k9ele7en / Triton-TensorRT-Inference-CRAFT-pytorch
Advanced inference pipeline using NVIDIA Triton Inference Server for CRAFT text detection (PyTorch). Includes a PyTorch -> ONNX -> TensorRT converter and inference pipelines (TensorRT, Triton server, multi-format). Supported model formats for Triton inference: TensorRT engine, TorchScript, ONNX.
☆32 · Updated 3 years ago
Alternatives and similar repositories for Triton-TensorRT-Inference-CRAFT-pytorch:
Users interested in Triton-TensorRT-Inference-CRAFT-pytorch are comparing it to the libraries listed below:
- How to run YOLOv5-Face inference in C++ ☆9 · Updated 2 years ago
- Triton server ensemble model demo ☆30 · Updated 2 years ago
- YOLOv7 training. Generates a head-only dataset in YOLO format. The labels included in the CrowdHuman dataset are Head and FullBody, but i… ☆29 · Updated 8 months ago
- MagFace Triton Inference Server using TensorRT ☆16 · Updated 3 years ago
- A tool to convert a TensorRT engine/plan to a fake ONNX ☆38 · Updated 2 years ago
- How to deploy open source models using DeepStream and Triton Inference Server ☆77 · Updated 8 months ago
- PP-YOLOE object detection deployed with ONNXRuntime; supports the four variants PP-YOLOE-s, PP-YOLOE-m, PP-YOLOE-l, and PP-YOLOE-x, with both C++ and Python programs ☆18 · Updated 2 years ago
- ☆53 · Updated 3 years ago
- Wanwu models release; code will be released soon ☆24 · Updated 2 years ago
- ☆53 · Updated 3 years ago
- Yet another SSD, with its runtime stack for libtorch, ONNX, and specialized accelerators ☆26 · Updated 3 years ago
- C++ implementation of YOLOv6 using TensorRT ☆28 · Updated 2 years ago
- YOLOv5 object detection on Triton Inference Server ☆15 · Updated last year
- An SDK showing how to use an OpenVINO model converted from YOLOv5 ☆36 · Updated 4 years ago
- C++ application to perform computer vision tasks using Nvidia Triton Server for model inference ☆23 · Updated last week
- ☆9 · Updated 11 months ago
- ☆63 · Updated 2 years ago
- Deploy the YOLOX algorithm using DeepStream ☆89 · Updated 3 years ago
- This repo provides a C++ implementation of YOLO-NAS based on ONNXRuntime for performing object detection in real time. Supports float32/f… ☆43 · Updated 11 months ago
- Describes how to enable the OpenVINO Execution Provider for ONNX Runtime ☆19 · Updated 4 years ago
- TensorRT version of the DBNet natural scene text detection network ☆22 · Updated 4 years ago
- TensorRT YOLOv7 without an ONNX parser ☆24 · Updated 2 years ago
- ☆15 · Updated 2 years ago
- Exporting YOLOv5 for CPU inference with ONNX and OpenVINO ☆36 · Updated 7 months ago
- Demos for how to use the shared libs of Lite.AI.ToolKit🚀🚀🌟 (https://github.com/DefTruth/lite.ai.toolkit) ☆7 · Updated 3 years ago
- A replacement for the traditional NMS post-processing method in object detection ☆17 · Updated 3 years ago
- ☆29 · Updated 3 years ago
- ☆21 · Updated 2 years ago
- ☆15 · Updated 2 years ago
- The C++ version of ThunderNet with ncnn ☆14 · Updated 4 years ago