k9ele7en / Triton-TensorRT-Inference-CRAFT-pytorch
Advanced inference pipeline using NVIDIA Triton Inference Server for CRAFT text detection (PyTorch). Includes a converter from PyTorch -> ONNX -> TensorRT and inference pipelines (TensorRT, Triton server - multi-format). Supported model formats for Triton inference: TensorRT engine, TorchScript, ONNX.
☆33 · Updated 4 years ago
Alternatives and similar repositories for Triton-TensorRT-Inference-CRAFT-pytorch
Users interested in Triton-TensorRT-Inference-CRAFT-pytorch are comparing it to the libraries listed below.
- How to run Yolov5-Face inference in C++ ☆11 · Updated 3 years ago
- Deploy the YOLOX algorithm using DeepStream ☆91 · Updated 4 years ago
- ☆67 · Updated 3 years ago
- How to deploy open-source models using DeepStream and Triton Inference Server ☆86 · Updated last year
- Wanwu models release; code will be released soon ☆24 · Updated 3 years ago
- Set up CI in DL / CUDA / cuDNN / TensorRT / onnx2trt / onnxruntime / onnxsim / PyTorch / Triton-Inference-Server / Bazel / Tesseract / PaddleOCR / NV… ☆44 · Updated 2 years ago
- Yet another SSD, with its runtime stack for libtorch, ONNX, and specialized accelerators ☆26 · Updated 10 months ago
- A tool to convert a TensorRT engine/plan to a fake ONNX model ☆42 · Updated 3 years ago
- End-to-end face detection, cropping, norm estimation, and landmark detection in a single ONNX model ☆83 · Updated 3 years ago
- Triton server ensemble model demo ☆30 · Updated 3 years ago
- Deep-learning-based face anti-spoofing ☆55 · Updated 4 years ago
- TensorRT YOLOv7 without an ONNX parser