CVCUDA / CV-CUDA
CV-CUDA™ is an open-source, GPU-accelerated library for cloud-scale image processing and computer vision.
☆2,496 · Updated 2 weeks ago
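For a sense of how CV-CUDA is used in practice, below is a minimal sketch of batched, GPU-side image preprocessing through its Python bindings. The names used here (cvcuda.as_tensor, cvcuda.resize, cvcuda.Interp.LINEAR, Tensor.cuda()) follow the published Python API but are assumed and may differ between releases; treat this as an illustrative sketch rather than canonical usage.

```python
# Minimal sketch: resize a batch of images on the GPU with CV-CUDA.
# Assumes the cvcuda and torch packages are installed and a CUDA device is available;
# operator names follow CV-CUDA's public Python API and may vary by version.
import torch
import cvcuda

# A batch of 8 HWC uint8 images already resident on the GPU (NHWC layout).
images = torch.randint(0, 256, (8, 480, 640, 3), dtype=torch.uint8, device="cuda")

# Wrap the PyTorch tensor as a CV-CUDA tensor without copying (zero-copy interop).
cv_images = cvcuda.as_tensor(images, "NHWC")

# Resize the whole batch to 224x224 with bilinear interpolation, entirely on the GPU.
resized = cvcuda.resize(cv_images, (8, 224, 224, 3), cvcuda.Interp.LINEAR)

# View the result back as a PyTorch tensor for downstream inference.
out = torch.as_tensor(resized.cuda(), device="cuda")
print(out.shape)  # expected: torch.Size([8, 224, 224, 3])
```

Because the tensors are wrapped rather than copied, the preprocessing output can feed straight into a PyTorch or TensorRT inference stage without leaving the GPU.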
Alternatives and similar repositories for CV-CUDA
Users interested in CV-CUDA are comparing it to the libraries listed below.
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT ☆2,749 · Updated this week
- ONNX-TensorRT: TensorRT backend for ONNX ☆3,072 · Updated this week
- Simple samples for TensorRT programming ☆1,598 · Updated last month
- A primitive library for neural networks ☆1,337 · Updated 5 months ago
- Lightning-fast C++/CUDA neural network framework ☆4,014 · Updated 2 weeks ago
- Simplify your ONNX model ☆4,074 · Updated 8 months ago
- Set of Python bindings to C++ libraries which provides full HW acceleration for video decoding, encoding and GPU-accelerated color space … ☆1,334 · Updated 11 months ago
- OpenMMLab Model Deployment Framework ☆2,943 · Updated 7 months ago
- A tool to modify ONNX models visually, based on Netron and Flask. ☆1,492 · Updated 2 months ago
- An easy-to-use PyTorch to TensorRT converter ☆4,736 · Updated 9 months ago
- ppl.cv is a high-performance image processing library of openPPL supporting various platforms. ☆503 · Updated 6 months ago
- Sample codes for my CUDA programming book ☆1,712 · Updated 3 months ago
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper, Ada and Bla… ☆2,412 · Updated this week
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source compone… ☆11,575 · Updated last week
- Deploy your model with TensorRT quickly. ☆769 · Updated last year
- CUDA Library Samples ☆1,924 · Updated this week
- PPL Quantization Tool (PPQ) is a powerful offline neural network quantization tool. ☆1,687 · Updated last year
- TensorRT C++ API Tutorial ☆707 · Updated 6 months ago
- AITemplate is a Python framework which renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (N… ☆4,633 · Updated last month
- Transformer-related optimization, including BERT, GPT ☆6,152 · Updated last year
- ☆1,024 · Updated last year
- nvidia-modelopt is a unified library of state-of-the-art model optimization techniques like quantization, pruning, distillation, speculat… ☆922 · Updated this week
- Easy-to-use, high-performance, multi-platform inference deployment framework ☆764 · Updated this week
- ☆2,433 · Updated last year
- A GPU-accelerated library containing highly optimized building blocks and an execution engine for data processing to accelerate deep lear… ☆5,401 · Updated this week
- How to optimize some algorithms in CUDA. ☆2,162 · Updated this week
- The Triton Inference Server provides an optimized cloud and edge inferencing solution. ☆9,200 · Updated this week
- detrex is a research platform for DETR-based object detection, segmentation, pose estimation and other visual recognition tasks. ☆2,153 · Updated 9 months ago
- Triton Python, C++ and Java client libraries, and GRPC-generated client examples for Go, Java and Scala. ☆620 · Updated this week
- ONNX Optimizer ☆707 · Updated 2 weeks ago