RomanArzumanyan / VALI
Video processing in Python
☆58 · Updated 2 weeks ago
Alternatives and similar repositories for VALI:
Users interested in VALI are comparing it to the libraries listed below.
- nvImageCodec, a library of GPU- and CPU-accelerated codecs featuring a unified interface (☆97, updated last month)
- nvjpeg for python (☆98, updated 2 years ago)
- A set of simple tools for splitting, merging, OP deletion, size compression, rewriting attributes and constants, OP generation, change op… (☆290, updated last year)
- DeepStream Libraries offer CVCUDA, NvImageCodec, and PyNvVideoCodec modules as Python APIs for seamless integration into custom framewor… (☆50, updated 6 months ago)
- C++ Helper Class for Deep Learning Inference Frameworks: TensorFlow Lite, TensorRT, OpenCV, OpenVINO, ncnn, MNN, SNPE, Arm NN, NNabla, ON… (☆287, updated 3 years ago)
- Implementation of YOLOv9 QAT optimized for deployment on TensorRT platforms (☆107, updated 2 months ago)
- A toolkit to help optimize large ONNX models (☆153, updated 11 months ago)
- A toolkit to help optimize ONNX models (☆140, updated this week)
- Script to typecast ONNX model parameters from INT64 to INT32 (☆106, updated 11 months ago; sketch after the list)
- ffmpegcv, an FFmpeg-based, OpenCV-like video reader and writer (☆200, updated last week; sketch after the list)
- Count the number of parameters / MACs / FLOPs for ONNX models (☆91, updated 6 months ago; sketch after the list)
- A PyTorch-to-TensorRT converter with dynamic shape support (☆260, updated last year; sketch after the list)
- Image resizing in CUDA, Python, and CuPy (☆41, updated last year; sketch after the list)
- ONNX Runtime Inference C++ Example (☆235, updated 3 weeks ago; sketch after the list)
- A project demonstrating how to use nvmetamux to run multiple models in parallel (☆99, updated 6 months ago)
- Useful TensorRT plugins for PyTorch and mmdetection model conversion (☆165, updated 6 months ago)
- This repository provides an optical character detection and recognition solution optimized for NVIDIA devices (☆74, updated 2 weeks ago)
- Pythonic NVIDIA codec library (☆15, updated 2 years ago)
- Using TensorRT for inference model deployment (☆48, updated last year)
- How to deploy open source models using DeepStream and Triton Inference Server (☆79, updated 9 months ago)
- This repository deploys YOLOv4 as an optimized TensorRT engine to Triton Inference Server (☆283, updated 2 years ago; sketch after the list)
- An example of Segment Anything inference with ncnn (☆121, updated last year)
- This project explores deploying Swin Transformer with TensorRT, including FP16 and INT8 test results (☆166, updated 2 years ago)
- A C++ implementation of Towards-Realtime-MOT (☆127, updated 3 years ago)
- This repo provides a C++ implementation of YOLO-NAS based on ONNX Runtime for real-time object detection. Supports float32/f… (☆43, updated last year)
- Utility scripts for editing or modifying ONNX models. Utility scripts to summarize ONNX model files along with visualization for loop ope… (☆79, updated 3 years ago)
- (☆34, updated last year)
- The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's Python API (☆132, updated this week; sketch after the list)
- Decode JPEG images on the GPU using PyTorch (☆90, updated last year; sketch after the list)
- TensorRT plugin for a 3D grid sample operator (☆33, updated 2 months ago)
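A few of the entries above lend themselves to short, hedged sketches. For the INT64-to-INT32 typecast script, the underlying idea (not the listed script itself) is to downcast the graph initializers with the stock onnx package; the file names below are placeholders, and a complete converter would also have to rewrite Cast nodes and tensor type annotations.

```python
# Generic sketch: downcast INT64 initializers in an ONNX model to INT32.
# "model.onnx" / "model_int32.onnx" are placeholder paths.
import numpy as np
import onnx
from onnx import numpy_helper

model = onnx.load("model.onnx")
for init in model.graph.initializer:
    if init.data_type == onnx.TensorProto.INT64:
        arr = numpy_helper.to_array(init).astype(np.int32)
        init.CopyFrom(numpy_helper.from_array(arr, init.name))
onnx.save(model, "model_int32.onnx")
```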
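The ffmpegcv entry advertises an OpenCV-style interface. Assuming its VideoCapture/VideoWriter classes behave as documented, a read-process-write loop might look roughly like this; file names are placeholders and the exact constructor arguments should be checked against the project README.

```python
# Rough sketch of an OpenCV-style decode/encode loop with ffmpegcv.
import ffmpegcv

cap = ffmpegcv.VideoCapture("input.mp4")        # ffmpegcv.VideoCaptureNV targets NVIDIA decoders
out = ffmpegcv.VideoWriter("output.mp4", None, cap.fps)
while True:
    ok, frame = cap.read()                      # cv2-like (ok, frame) pair; frame is a numpy array
    if not ok:
        break
    out.write(frame)
cap.release()
out.release()
```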
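The parameter/MAC/FLOP counter needs per-operator shape inference to report MACs; a bare parameter count, however, can be reproduced with the onnx package alone. This sketch covers only that simpler piece.

```python
# Sketch: count parameters by summing the sizes of all graph initializers.
import onnx
from onnx import numpy_helper

model = onnx.load("model.onnx")                 # placeholder path
n_params = sum(numpy_helper.to_array(t).size for t in model.graph.initializer)
print(f"parameters: {n_params:,}")
```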
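The dynamic-shape PyTorch-to-TensorRT converter has its own API; as a generic illustration of the same flow (assuming TensorRT 8.x Python bindings), one can export ONNX with a dynamic batch axis and attach an optimization profile when building the engine. The network below is a stand-in and paths are placeholders.

```python
# Generic sketch: ONNX export with a dynamic batch dim + TensorRT optimization profile.
import tensorrt as trt
import torch

model = torch.nn.Conv2d(3, 16, 3).eval()        # stand-in for a real network
torch.onnx.export(model, torch.randn(1, 3, 224, 224), "net.onnx",
                  input_names=["input"], output_names=["output"],
                  dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}})

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
with open("net.onnx", "rb") as f:
    assert parser.parse(f.read()), parser.get_error(0)

config = builder.create_builder_config()
profile = builder.create_optimization_profile()
profile.set_shape("input", (1, 3, 224, 224), (8, 3, 224, 224), (16, 3, 224, 224))
config.add_optimization_profile(profile)
with open("net.engine", "wb") as f:
    f.write(builder.build_serialized_network(network, config))
```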
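The CUDA/CuPy resize project ships its own kernel; as a point of comparison, stock CuPy can already do a GPU bilinear resize through cupyx.scipy.ndimage.zoom, as sketched here with a dummy image.

```python
# Sketch: bilinear GPU resize with stock CuPy (the listed repo uses its own CUDA kernel).
import cupy as cp
from cupyx.scipy import ndimage

img = cp.random.rand(1080, 1920, 3).astype(cp.float32)    # dummy HWC image on the GPU
zoom = (720 / 1080, 1280 / 1920, 1)                       # per-axis scale, channels untouched
small = ndimage.zoom(img, zoom, order=1)                   # order=1 -> bilinear interpolation
print(small.shape)                                         # (720, 1280, 3)
```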
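The ONNX Runtime inference example is C++; for orientation, the equivalent Python call path is very short. Model path and dummy input shape are placeholders.

```python
# Sketch: minimal onnxruntime inference in Python, mirroring what the C++ example does.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
name = sess.get_inputs()[0].name
x = np.random.rand(1, 3, 224, 224).astype(np.float32)     # dummy input
outputs = sess.run(None, {name: x})
print([o.shape for o in outputs])
```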
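For the YOLOv4-on-Triton entry, the server-side setup lives in the repo; a client-side request with tritonclient might look roughly like this, with the model name, tensor names, and input shape all placeholders rather than the repo's actual configuration.

```python
# Hedged sketch of a Triton HTTP client call; names and shapes are placeholders.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")
img = np.random.rand(1, 3, 608, 608).astype(np.float32)   # dummy preprocessed batch

inp = httpclient.InferInput("input", list(img.shape), "FP32")
inp.set_data_from_numpy(img)
result = client.infer(model_name="yolov4", inputs=[inp])
print(result.as_numpy("output").shape)                     # placeholder output tensor name
```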
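The DALI Triton backend consumes a serialized DALI pipeline. A hedged sketch of producing such an artifact with DALI's Python API follows; the input name, sizes, and output path are illustrative, not the backend's required layout.

```python
# Sketch: define a DALI preprocessing pipeline and serialize it for the Triton DALI backend.
from nvidia.dali import fn, pipeline_def

@pipeline_def(batch_size=8, num_threads=2, device_id=0)
def preprocess():
    raw = fn.external_source(device="cpu", name="encoded")   # raw bytes fed in by Triton
    imgs = fn.decoders.image(raw, device="mixed")             # nvJPEG-backed decode
    return fn.resize(imgs, resize_x=224, resize_y=224)

preprocess().serialize(filename="model.dali")   # artifact placed in the model repository
```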
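Finally, the GPU JPEG decode repo parallels torchvision's own nvJPEG path; as a reference point, recent torchvision can do the same operation, which may or may not match the listed repo's API.

```python
# Sketch: GPU JPEG decode via torchvision's nvJPEG-backed decoder (recent torchvision).
from torchvision.io import decode_jpeg, read_file

data = read_file("image.jpg")             # raw JPEG bytes as a uint8 CPU tensor; placeholder path
img = decode_jpeg(data, device="cuda")    # decoded CHW uint8 tensor already on the GPU
print(img.shape, img.device)
```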