CVCUDA / CV-CUDA
CV-CUDA™ is an open-source, GPU-accelerated library for cloud-scale image processing and computer vision.
☆2,463 · Updated 2 weeks ago
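For context, CV-CUDA's point is to keep image pre- and post-processing on the GPU so it can sit next to the inference step without host round-trips. Below is a minimal sketch of batched preprocessing with the `cvcuda` Python bindings; the `as_tensor`/`resize` calls and the `Interp.LINEAR` enum follow the project's published samples, but exact signatures can vary between releases, so treat it as illustrative rather than authoritative.

```python
# Minimal sketch: batched GPU preprocessing with CV-CUDA's Python bindings.
# Assumes `cvcuda` and `torch` are installed and a CUDA device is available;
# operator names mirror the project's samples, exact signatures may differ.
import torch
import cvcuda

# A batch of 4 uint8 images already resident on the GPU, NHWC layout.
images = torch.randint(0, 256, (4, 720, 1280, 3),
                       dtype=torch.uint8, device="cuda")

# Zero-copy wrap of the PyTorch tensor as a CV-CUDA tensor.
src = cvcuda.as_tensor(images, "NHWC")

# Batched resize executed entirely on the GPU; no copy back to the host.
resized = cvcuda.resize(src, (4, 224, 224, 3), cvcuda.Interp.LINEAR)
```

The output stays in device memory, so it can be handed straight to a TensorRT engine or back to PyTorch for inference; that avoided host copy is what the library is built around.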
Alternatives and similar repositories for CV-CUDA:
Users interested in CV-CUDA are comparing it to the libraries listed below.
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT ☆2,705 · Updated this week
- ONNX-TensorRT: TensorRT backend for ONNX ☆3,038 · Updated last week
- Simple samples for TensorRT programming ☆1,586 · Updated last week
- OpenMMLab Model Deployment Framework ☆2,880 · Updated 5 months ago
- An easy-to-use PyTorch to TensorRT converter ☆4,691 · Updated 7 months ago
- Simplify your ONNX model ☆3,997 · Updated 6 months ago
- A primitive library for neural networks ☆1,321 · Updated 3 months ago
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source compone… ☆11,324 · Updated last week
- Set of Python bindings to C++ libraries which provides full HW acceleration for video decoding, encoding and GPU-accelerated color space … ☆1,325 · Updated 9 months ago
- ppl.cv is a high-performance image processing library of openPPL supporting various platforms. ☆500 · Updated 4 months ago
- Deploy your model with TensorRT quickly. ☆764 · Updated last year
- A tool to modify ONNX models visually, based on Netron and Flask. ☆1,444 · Updated 3 weeks ago
- Implementation of popular deep learning networks with the TensorRT network definition API ☆7,243 · Updated 3 months ago
- nndeploy is an end-to-end model inference and deployment framework. It aims to provide users with a powerful, easy-to-use, high-performan… ☆710 · Updated this week
- CUDA Library Samples ☆1,820 · Updated last week
- C++ library based on TensorRT integration ☆2,706 · Updated last year
- PPL Quantization Tool (PPQ) is a powerful offline neural network quantization tool. ☆1,654 · Updated 11 months ago
- 🔥🔥🔥🔥 (Earlier YOLOv7 not official one) YOLO with Transformers and Instance Segmentation, with TensorRT acceleration! 🔥🔥🔥 ☆3,124 · Updated last year
- TensorRT C++ API Tutorial ☆678 · Updated 4 months ago
- YOLOv8 accelerated with TensorRT! ☆1,525 · Updated last week
- ☆1,017 · Updated last year
- Actively maintained ONNX Optimizer ☆678 · Updated last month
- yolort is a runtime stack for YOLOv5 on specialized accelerators such as TensorRT, LibTorch, ONNX Runtime, TVM and ncnn. ☆729 · Updated last month
- DeepStream SDK Python bindings and sample applications ☆1,572 · Updated 5 months ago
- A unified library of state-of-the-art model optimization techniques such as quantization, pruning, distillation, speculative decoding, et… ☆786 · Updated this week
- The Triton Inference Server provides an optimized cloud and edge inferencing solution. ☆8,893 · Updated this week
- C++/CUDA/Python multimedia utilities for NVIDIA Jetson ☆780 · Updated 5 months ago
- TensorRT Plugin Autogen Tool ☆369 · Updated last year
- AIMET is a library that provides advanced quantization and compression techniques for trained neural network models. ☆2,249 · Updated this week
- A distilled Segment Anything (SAM) model capable of running in real time with NVIDIA TensorRT ☆725 · Updated last year