Huntersdeng / CXX-DeepLearning-Inference
A unified and extensible pipeline for deep learning model inference in C++. Currently supports yolov8, yolov9, clip, and nanosam; more models will be added soon.
☆11 Updated 11 months ago
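To give a rough idea of what a unified, extensible C++ inference layer of this kind typically looks like, below is a minimal hypothetical sketch: the class names, methods, and engine path are illustrative assumptions, not the repository's actual API.

```cpp
// Hypothetical sketch (NOT the CXX-DeepLearning-Inference API): every model
// backend implements the same Load/Infer contract, so calling code does not
// care whether the engine underneath is ONNX Runtime, TensorRT, or other.
#include <memory>
#include <string>
#include <vector>

struct Image {                        // placeholder for a decoded frame
    int width = 0, height = 0;
    std::vector<unsigned char> data;  // HWC, BGR
};

struct Detection {
    float x, y, w, h;                 // box in pixel coordinates
    int   class_id;
    float score;
};

class InferenceModel {                // common interface for all backends
public:
    virtual ~InferenceModel() = default;
    virtual bool Load(const std::string& engine_path) = 0;
    virtual std::vector<Detection> Infer(const Image& frame) = 0;
};

// Stub backend; a real YOLOv8 TensorRT backend would preprocess the frame,
// run the engine, then decode and NMS the raw outputs here.
class DummyDetector : public InferenceModel {
public:
    bool Load(const std::string&) override { return true; }
    std::vector<Detection> Infer(const Image&) override { return {}; }
};

int main() {
    std::unique_ptr<InferenceModel> model = std::make_unique<DummyDetector>();
    model->Load("yolov8n.engine");    // path is illustrative only
    Image frame;                      // would normally come from a decoder
    auto dets = model->Infer(frame);
    return dets.empty() ? 0 : 1;
}
```

Keeping every backend behind one `Infer` contract is what makes such pipelines extensible: supporting a new model family generally means adding another subclass, not touching the calling code.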
Alternatives and similar repositories for CXX-DeepLearning-Inference:
Users interested in CXX-DeepLearning-Inference are comparing it to the libraries listed below
- C++ TensorRT implementation of NanoSAM ☆37 Updated last year
- Pose estimation code with DeepStream and YOLO-Pose ☆13 Updated 2 years ago
- Deployment version of FastSAM, easy to port to different platforms, simple to deploy, and fast to run. ☆18 Updated 10 months ago
- Seven different deployment methods tried for yolov11 (yolov8), with a brief summary of each method's strengths; whatever the platform, latency, or CPU-usage requirement, one of them should fit. Also a good starting reference for anyone new to deployment. ☆18 Updated 2 months ago
- RT-DETRv2 TensorRT C++ deployment ☆15 Updated 5 months ago
- TensorRT deployment of the YOLACT segmentation algorithm ☆14 Updated 2 years ago
- NanoTrack (@HonglinChu), C++ TensorRT deployment. MAX 250 FPS! ☆23 Updated last year
- ☆20 Updated 2 years ago
- yolov8 oriented (rotated) object detection deployment: Rockchip RKNN chips, Horizon chips, and TensorRT ☆27 Updated 10 months ago
- TensorRT C++ deployment of yolov11, with the time-consuming post-processing steps implemented in CUDA ☆32 Updated 3 months ago
- This project provides simple code and demonstrates how to use the TensorRT C++ API and ONNX to deploy the PaddleOCR text recognition model. ☆41 Updated 2 years ago
- Based on yolov8: C++ code for pt → onnx → tensorrt conversion and inference ☆59 Updated 2 years ago
- TensorRT SAHI YOLO object detection ☆40 Updated this week
- C++ code for deploying yolov8obb oriented (rotated) object detection on RKNN ☆17 Updated 9 months ago
- Efficient deployment: TRT inference for YOLOX, V3, V4, V5, V6, V7, V8, and EdgeYOLO ™️, with pre- and post-processing implemented in CUDA kernels. CPP/CUDA 🚀 ☆49 Updated 2 years ago
- Learn TensorRT from scratch 🥰 ☆13 Updated 6 months ago
- ffmpeg+cuvid+tensorrt+multicamera ☆12 Updated 3 months ago
- CenterNet object detection deployed with both OpenCV and ONNXRuntime, with C++ and Python versions of the program ☆9 Updated 2 years ago
- Speed up image preprocessing with CUDA when handling images or running TensorRT inference ☆65 Updated 3 weeks ago
- Notes on understanding the tensorRT_Pro open-source project ☆20 Updated 2 years ago
- C++ inference of yolov8-exported ONNX models with OpenVINO, covering image classification, object detection, and semantic segmentation; steps include image preprocessing, inference, NMS, etc. ☆62 Updated 11 months ago
- Deploys the Nanodet detection algorithm on the OpenVINO inference framework with rewritten pre- and post-processing for very high performance, making detection fly on Intel CPU platforms. The model is also quantized (PTQ) to int8 with the NNCF and PPQ tools for even faster inference. ☆15 Updated last year
- DETR TensorRT deployment: removes the auxiliary heads that are unused at inference time, adds a further fp16 speed-up, and offers a new fix for the all-zero outputs problem after converting to TensorRT. ☆12 Updated last year
- Deploys Detic with ONNXRuntime to detect objects from 21,000 categories, with C++ and Python versions of the program ☆17 Updated last year
- YOLOX deployment implemented with TensorRT ☆13 Updated 2 years ago
- On-board C++ deployment of yolov8seg with Rockchip RKNN, targeting the rk3588 platform. ☆24 Updated 11 months ago
- Learning a range of topics by following Tensorrt_pro ☆39 Updated 2 years ago
- Quantize yolov5 using pytorch_quantization.🚀🚀🚀 ☆13 Updated last year
- ☆18 Updated 2 years ago
- Quantize yolov7 using pytorch_quantization.🚀🚀🚀 ☆10 Updated last year