RomanArzumanyan / VALI
Video processing in Python
☆66 · Updated last month
Alternatives and similar repositories for VALI
Users interested in VALI are comparing it to the libraries listed below:
- A toolkit to help optimize ONNX models ☆256 · Updated this week
- nvJPEG for Python ☆103 · Updated 2 years ago
- An nvImageCodec library of GPU- and CPU-accelerated codecs featuring a unified interface ☆126 · Updated 3 months ago
- A set of simple tools for splitting, merging, OP deletion, size compression, rewriting attributes and constants, OP generation, change op… ☆300 · Updated last year
- DeepStream Libraries offer CVCUDA, NvImageCodec, and PyNvVideoCodec modules as Python APIs for seamless integration into custom framewor… ☆70 · Updated 2 months ago
- A toolkit to help optimize large ONNX models ☆162 · Updated last month
- ffmpegcv is an FFmpeg-backed, OpenCV-like video reader and writer ☆226 · Updated 2 months ago
- A toolkit showing the GPU's all-round capability in video processing ☆193 · Updated 2 years ago
- ONNX Runtime inference C++ example ☆253 · Updated 8 months ago
- C++ helper class for deep learning inference frameworks: TensorFlow Lite, TensorRT, OpenCV, OpenVINO, ncnn, MNN, SNPE, Arm NN, NNabla, ON… ☆298 · Updated 3 years ago
- Image resizing in CUDA, Python, and CuPy ☆42 · Updated 2 years ago
- Script to typecast ONNX model parameters from INT64 to INT32 ☆112 · Updated last year
- A PyTorch-to-TensorRT converter with dynamic shape support ☆267 · Updated last year
- The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's Python API ☆139 · Updated 3 weeks ago
- Porting of the Pillow resize method to C++ and OpenCV ☆147 · Updated 2 years ago
- Count the number of parameters / MACs / FLOPS of ONNX models ☆95 · Updated last year
- Using TensorRT for inference model deployment ☆49 · Updated last year
- An example of Segment Anything inference with ncnn ☆125 · Updated 2 years ago
- Implementation of YOLOv9 QAT optimized for deployment on TensorRT platforms ☆129 · Updated 7 months ago
- Pythonic NVIDIA codec library ☆16 · Updated 3 years ago
- A cross-platform, high-performance, FFmpeg-based real-time video frame decoder in pure Python 🎞️⚡ ☆217 · Updated last year
- Serving inside PyTorch ☆165 · Updated last week
- Decode JPEG images on the GPU using PyTorch ☆93 · Updated 2 years ago
- A shared library of on-demand DeepStream pipeline services for Python and C/C++ ☆331 · Updated 8 months ago
- A project demonstrating how to use nvmetamux to run multiple models in parallel ☆112 · Updated last year
- This repository deploys YOLOv4 as an optimized TensorRT engine to Triton Inference Server ☆286 · Updated 3 years ago
- Generate saved_model, tfjs, tf-trt, EdgeTPU, CoreML, quantized tflite, ONNX, OpenVINO, Myriad Inference Engine blob and .pb from .tflite… ☆272 · Updated 3 years ago
- ☆39 · Updated 2 years ago
- How to deploy open-source models using DeepStream and Triton Inference Server ☆86 · Updated last year
- Advanced inference pipeline using NVIDIA Triton Inference Server for CRAFT text detection (PyTorch), including a converter from PyTorch -> O… ☆33 · Updated 4 years ago