riaqn / python-nvidia-codec
Pythonic Nvidia Codec Library
★15 · Updated 2 years ago
Related projects
Alternatives and complementary repositories for python-nvidia-codec
- Video processing in Python ★43 · Updated last week
- A cross-platform, high-performance FFmpeg-based real-time video frame decoder in pure Python ★184 · Updated 4 months ago
- ffmpegcv is an FFmpeg-based backend providing OpenCV-like video reader and writer interfaces ★167 · Updated 2 months ago
- nvjpeg for Python ★93 · Updated last year
- The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's Python API ★125 · Updated 2 weeks ago
- This repository serves as an example of deploying YOLO models on Triton Server for performance and testing purposes ★43 · Updated 5 months ago
- Deploy SCRFD, an efficient, high-accuracy face detection approach, in your web browser with ncnn and WebAssembly ★52 · Updated last year
- nvImageCodec: a library of GPU- and CPU-accelerated codecs featuring a unified interface ★80 · Updated last week
- Home of Intel(R) Deep Learning Streamer Pipeline Server (formerly Video Analytics Serving) ★125 · Updated last year
- ★115 · Updated 4 years ago
- A project demonstrating how to use nvmetamux to run multiple models in parallel ★95 · Updated last month
- DeGirum PySDK Usage Examples ★19 · Updated 3 weeks ago
- An example of Segment Anything inference with ncnn ★120 · Updated last year
- This repository provides an optical character detection and recognition solution optimized for Nvidia devices ★55 · Updated last month
- TAO Toolkit deep learning networks with TensorFlow 1.x backend ★11 · Updated 9 months ago
- NVIDIA Jetson and DeepStream Python Examples ★28 · Updated 3 years ago
- DeepStream Libraries offer CVCUDA, NvImageCodec, and PyNvVideoCodec modules as Python APIs for seamless integration into custom framewor… ★25 · Updated last month
- How to deploy open source models using DeepStream and Triton Inference Server ★74 · Updated 4 months ago
- Hardware-accelerated PyTorch container with (also accelerated) FFmpeg & OpenCV 4 ★22 · Updated last year
- Demonstration of the use of TensorRT and Triton ★17 · Updated 3 years ago
- A very simple tool that compresses the overall size of an ONNX model by aggregating duplicate constant values as much as possible ★52 · Updated 2 years ago
- Advanced inference pipeline using NVIDIA Triton Inference Server for CRAFT text detection (PyTorch), includes a converter from PyTorch -> O… ★32 · Updated 3 years ago
- YOLOv7 training. Generates a head-only dataset in YOLO format. The labels included in the CrowdHuman dataset are Head and FullBody, but i… ★28 · Updated 5 months ago
- Model compression for ONNX ★75 · Updated this week
- Implementation of End-to-End YOLO Models for DeepStream ★36 · Updated 2 weeks ago
- Python Object Detection Insights ★192 · Updated last year
- This repository deploys YOLOv4 as an optimized TensorRT engine to Triton Inference Server ★279 · Updated 2 years ago
- A toolkit to help optimize ONNX models ★81 · Updated this week
- A shared library of on-demand DeepStream pipeline services for Python and C/C++ ★288 · Updated this week