zhg-SZPT / FastSAM_Awsome_Openvino
The "FastSAM_Awsome_Openvino" project shows how to efficiently deploy the FastSAM model with the OpenVINO framework, delivering impressive instance-segmentation capability. The project provides both a C++ and a Python implementation, giving developers a choice of language environment for using the FastSAM model.
☆35 · Updated last year
Alternatives and similar repositories for FastSAM_Awsome_Openvino
Users interested in FastSAM_Awsome_Openvino are comparing it to the libraries listed below.
- Code to implement Segment Anything (SAM) using TensorRT (C++). ☆41 · Updated last year
- Deployment of the yolov8-seg detection/segmentation model on Jetson AGX Xavier, with adaptive low-light compensation. ☆49 · Updated 3 months ago
- YOLOv8 C++ DET/SEG/POSE TensorRT inference library, designed to be easy to learn from, extend, and deploy in real projects. ☆18 · Updated last year
- C++ inference of YOLOv8-exported ONNX models with OpenVINO, covering image classification, object detection, and semantic segmentation; the pipeline includes image preprocessing, inference, and NMS. ☆64 · Updated last year
- TensorRT SAHI YOLO object detection. ☆54 · Updated last week
- Using ONNX Runtime in C++ to run inference for yolov10, yolov10+SAM, yolov10+bytetrack, SAM2, and PaddleOCR. ☆121 · Updated 3 weeks ago
- Notes on understanding the tensorRT_Pro open-source project. ☆21 · Updated 2 years ago
- SPEED SAM C++ TENSORRT: a high-performance C++ implementation using TensorRT and CUDA. ☆32 · Updated 6 months ago
- A script that reads video with OpenCV and streams the frames over a socket, cutting data volume by about 90% compared to streaming via ffmpeg. ☆11 · Updated last year
- RT-DETRv2 TensorRT C++ deployment. ☆17 · Updated 7 months ago
- yolov8 rotated (oriented) object detection deployment for Rockchip RKNN chips, Horizon chips, and TensorRT. ☆26 · Updated last year
- YOLOv8 inference C++ sample code based on the OpenVINO C++ API. ☆43 · Updated 2 years ago
- yolov8n deployment version: exports an ONNX model using the official export script and tests deployment on different platforms for easy porting (ONNX, TensorRT, RKNN, Horizon). ☆38 · Updated 2 years ago
- Accelerating YOLOv8-Seg with TensorRT, with a complete backend stack including an HTTP server, a MySQL database, and ffmpeg video streaming. ☆83 · Updated last year
- Deploy YOLO models in C++ with OpenCV 4.8 or ONNX Runtime. ☆46 · Updated 3 months ago
- Speed up image preprocessing with CUDA when handling images or running TensorRT inference. ☆68 · Updated 3 weeks ago
- Simple code demonstrating how to use the TensorRT C++ API and ONNX to deploy the PaddleOCR text recognition model. ☆44 · Updated 2 years ago
- yolov11 TensorRT C++ deployment; the more time-consuming post-processing steps are implemented in CUDA. ☆37 · Updated 5 months ago
- ☆20 · Updated last year
- ☆18 · Updated 2 years ago
- C++ code for deploying yolov8obb rotated object detection on RKNN. ☆17 · Updated 10 months ago
- C++ application to perform computer vision tasks using Nvidia Triton Server for model inference. ☆23 · Updated last month
- FastSAM deployment version: easy to port across platforms, simple to deploy, and fast. ☆18 · Updated last year
- Simplest YOLOv8 segment ONNX model inference in C++ using ONNX Runtime and the OpenCV DNN module. ☆34 · Updated last year
- NanoTrack (@HonglinChu), C++ TensorRT deployment. Up to 250 FPS! ☆24 · Updated last year
- 🚀🚀🚀 A high-performance AI inference C++ library; currently supports the deployment of yolov5, yolov7, yolov7-pose, yolov8, yol… ☆128 · Updated last year
- Based on yolov8: provides pt→onnx→tensorrt conversion and C++ inference code. ☆59 · Updated 2 years ago
- Open-set (open-world) object detection with GroundingDINO deployed via ONNX Runtime, with both C++ and Python programs. ☆65 · Updated last year
- ☆36 · Updated 7 months ago
- Learning various topics by following Tensorrt_pro. ☆39 · Updated 2 years ago