autoexpect / rknn_ffmpeg_tutorial
FFmpeg -> Rockchip MPP decoding -> RKNPU (RKNN) inference -> OpenCV/OpenGL rendering
☆33 · Updated 2 years ago
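The line above is the whole pipeline of the tutorial: FFmpeg pulls and demuxes the stream, the Rockchip MPP decoder turns packets into frames, RKNN runs the model on the NPU, and OpenCV/OpenGL draws the result. The sketch below is a minimal, hedged illustration of that flow, not the tutorial's actual code: it assumes an FFmpeg build that exposes the `h264_rkmpp` decoder (falling back to software otherwise), uses a placeholder input URL, and leaves the RKNN step as a comment because the init and I/O calls differ between rknn-api and rknn2 versions.

```cpp
// Build (illustrative): g++ preview.cpp -o preview \
//   $(pkg-config --cflags --libs libavformat libavcodec libswscale libavutil opencv4)
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
}
#include <opencv2/opencv.hpp>

int main(int argc, char** argv) {
    const char* url = argc > 1 ? argv[1] : "rtsp://example/stream";  // placeholder input
    avformat_network_init();

    AVFormatContext* fmt = nullptr;
    if (avformat_open_input(&fmt, url, nullptr, nullptr) < 0) return 1;
    if (avformat_find_stream_info(fmt, nullptr) < 0) return 1;
    int vid = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, nullptr, 0);
    if (vid < 0) return 1;
    AVCodecParameters* par = fmt->streams[vid]->codecpar;

    // Prefer the MPP-backed decoder if this FFmpeg build has it; otherwise fall back to software.
    const AVCodec* dec = avcodec_find_decoder_by_name("h264_rkmpp");
    if (!dec) dec = avcodec_find_decoder(par->codec_id);
    AVCodecContext* ctx = avcodec_alloc_context3(dec);
    avcodec_parameters_to_context(ctx, par);
    if (avcodec_open2(ctx, dec, nullptr) < 0) return 1;

    AVPacket* pkt = av_packet_alloc();
    AVFrame* frm = av_frame_alloc();
    SwsContext* sws = nullptr;
    cv::Mat bgr;

    while (av_read_frame(fmt, pkt) >= 0) {
        if (pkt->stream_index == vid && avcodec_send_packet(ctx, pkt) >= 0) {
            while (avcodec_receive_frame(ctx, frm) >= 0) {
                // NOTE: some rkmpp builds hand back DRM_PRIME hardware frames; those would
                // need av_hwframe_transfer_data() or RGA before this CPU-side conversion.
                if (!sws)
                    sws = sws_getContext(frm->width, frm->height, (AVPixelFormat)frm->format,
                                         frm->width, frm->height, AV_PIX_FMT_BGR24,
                                         SWS_BILINEAR, nullptr, nullptr, nullptr);
                bgr.create(frm->height, frm->width, CV_8UC3);
                uint8_t* dst[4] = { bgr.data, nullptr, nullptr, nullptr };
                int dst_ls[4] = { (int)bgr.step, 0, 0, 0 };
                sws_scale(sws, frm->data, frm->linesize, 0, frm->height, dst, dst_ls);

                // RKNN inference would go here: resize/quantize `bgr`, rknn_inputs_set(),
                // rknn_run(), rknn_outputs_get(), then draw detections before display.
                cv::imshow("rknn_ffmpeg_tutorial preview", bgr);
                if (cv::waitKey(1) == 27) goto done;  // ESC quits
            }
        }
        av_packet_unref(pkt);
    }
done:
    sws_freeContext(sws);
    av_frame_free(&frm);
    av_packet_free(&pkt);
    avcodec_free_context(&ctx);
    avformat_close_input(&fmt);
    return 0;
}
```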
Related projects
Alternatives and complementary repositories for rknn_ffmpeg_tutorial
- GStreamer RTSP client supporting Rockchip and Jetson NX, for C/C++ and Python ☆59 · Updated 10 months ago
- ☆68 · Updated 3 months ago
- RKNN inference ☆42 · Updated 2 years ago
- On the Rockchip rv1109 AI chip, uses the rknn and opencv libraries, fixes a bug in the official yolov3 post-processing code, and cross-compiles the yolov3-demo example so it deploys and runs successfully on the board. ☆33 · Updated 3 years ago
- mpp + h264 + rga + live555 + opencv + rtsp ☆13 · Updated 4 years ago
- A secondary wrapper around the rknn2 API workflow that is easy to call; applicable to rk356x and rk3588. ☆44 · Updated 2 years ago
- H.264 software and hardware decoding, based on FFmpeg and MPP ☆11 · Updated 2 years ago
- ☆16 · Updated 3 years ago
- RKNN version demo of [CVPR21] LightTrack: Finding Lightweight Neural Network for Object Tracking via One-Shot Architecture Search ☆18 · Updated 2 years ago
- Stream pushing and pulling with ffmpeg on the Rockchip RK3588, using hardware-accelerated encoding and decoding ☆53 · Updated last year
- Multi-channel RTSP hardware decoding ☆14 · Updated 10 months ago
- Running yolov5 on HiSilicon NNIE ☆26 · Updated 2 years ago
- yolov5s NNIE ☆45 · Updated 3 years ago
- ☆22 · Updated 2 years ago
- On-board C++ deployment of yolov10 with Rockchip RKNN, targeting the RK3588 platform. ☆61 · Updated 4 months ago
- Multi-threaded YOLO deployment on the RK3588 with ReLU activation, reaching over 150 FPS. ☆17 · Updated last year
- Deployment version of yolov7, with post-processing rewritten in Python and C++ for easier porting across platforms (Caffe, ONNX, TensorRT, RKNN, Horizon). ☆31 · Updated last year
- ☆32 · Updated last year
- An encode/decode library wrapping the Jetson Multimedia API, modified from https://github.com/jocover/jetson-ffmpeg; not integrated into ffmpeg and usable standalone. ☆35 · Updated 2 years ago
- yolov7-tensorrtx ☆36 · Updated 2 years ago
- ☆19 · Updated 2 years ago
- Learning the YOLO algorithm and the RKNN framework ☆34 · Updated 3 years ago
- Simple yolov5 RTSP server for RK3588 ☆26 · Updated 2 months ago
- Template for RKNN model inference deployment ☆18 · Updated last year
- YOLOv3 example based on the HiSilicon Hi3519 ☆21 · Updated 3 years ago
- Deployment version of UNetMultiLane for multi-lane and lane-line type recognition; tested on different platforms (ONNX, TensorRT, RKNN, Horizon); identifies the current lane and the lane-line type. ☆22 · Updated 4 months ago
- ☆76 · Updated 3 weeks ago
- A plugin-oriented framework for video structuring. Chinese developers can add WeChat zhzhi78 to join the discussion group. ☆18 · Updated 5 months ago
- Deploying YOLOv5 on RKNN-3588, using a thread pool to achieve NPU inference acceleration (see the sketch after this list). ☆48 · Updated 2 months ago
- ☆22 · Updated 3 years ago
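Several entries above (notably the RKNN-3588 YOLOv5 thread-pool repo) share one pattern: a decode thread pushes frames into a queue, and a small pool of worker threads, each normally holding its own RKNN context pinned to one of the RK3588's three NPU cores, pulls frames and runs inference so decoding never blocks on the NPU. The sketch below shows only that dispatch pattern under stated assumptions: `infer()` is a hypothetical stand-in for the per-thread rknn_init/rknn_run calls, whose exact signatures vary between rknn-api versions.

```cpp
#include <condition_variable>
#include <cstdint>
#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// A decoded frame; in a real pipeline this would be the MPP/FFmpeg output buffer.
using Frame = std::vector<uint8_t>;

// Hypothetical stand-in for per-thread RKNN work: each worker would normally own its
// own rknn_context (rknn_init + rknn_inputs_set + rknn_run + rknn_outputs_get),
// optionally pinned to one NPU core on the rk3588.
static void infer(int worker_id, const Frame& frame) {
    std::printf("worker %d processed %zu bytes\n", worker_id, frame.size());
}

class InferencePool {
public:
    explicit InferencePool(int workers) {
        for (int i = 0; i < workers; ++i)
            threads_.emplace_back([this, i] { loop(i); });
    }
    ~InferencePool() {
        { std::lock_guard<std::mutex> lk(m_); stop_ = true; }
        cond_.notify_all();
        for (auto& t : threads_) t.join();  // workers drain the queue before exiting
    }
    void submit(Frame f) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(f)); }
        cond_.notify_one();
    }
private:
    void loop(int id) {
        for (;;) {
            Frame f;
            {
                std::unique_lock<std::mutex> lk(m_);
                cond_.wait(lk, [this] { return stop_ || !q_.empty(); });
                if (stop_ && q_.empty()) return;
                f = std::move(q_.front());
                q_.pop();
            }
            infer(id, f);  // runs outside the lock, so submitters never wait on the NPU
        }
    }
    std::mutex m_;
    std::condition_variable cond_;
    std::queue<Frame> q_;
    std::vector<std::thread> threads_;
    bool stop_ = false;
};

int main() {
    InferencePool pool(3);                  // e.g. one worker per rk3588 NPU core
    for (int i = 0; i < 10; ++i)
        pool.submit(Frame(640 * 640 * 3));  // decoded frames would come from MPP
    return 0;                               // destructor drains the queue and joins
}
```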