CVCUDA / CV-CUDA
CV-CUDA™ is an open-source, GPU-accelerated library for cloud-scale image processing and computer vision.
☆2,547 · Updated 2 months ago
Alternatives and similar repositories for CV-CUDA
Users interested in CV-CUDA are comparing it to the libraries listed below.
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT ☆2,818 · Updated this week
- Simple samples for TensorRT programming ☆1,627 · Updated 2 months ago
- ONNX-TensorRT: TensorRT backend for ONNX ☆3,124 · Updated last week
- OpenMMLab Model Deployment Framework ☆3,000 · Updated 10 months ago
- Simplify your ONNX model ☆4,121 · Updated 10 months ago
- A tool to modify ONNX models visually, based on Netron and Flask ☆1,536 · Updated 5 months ago
- A primitive library for neural networks ☆1,345 · Updated 8 months ago
- An easy-to-use PyTorch to TensorRT converter ☆4,783 · Updated 11 months ago
- OpenMMLab Foundational Library for Training Deep Learning Models ☆1,350 · Updated last month
- PPL Quantization Tool (PPQ) is a powerful offline neural network quantization tool ☆1,714 · Updated last year
- Set of Python bindings to C++ libraries which provides full HW acceleration for video decoding, encoding and GPU-accelerated color space … ☆1,344 · Updated last year
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper, Ada and Bla… ☆2,587 · Updated this week
- Transformer-related optimization, including BERT, GPT ☆6,261 · Updated last year
- CUDA Library Samples ☆2,040 · Updated 2 weeks ago
- ppl.cv is a high-performance image processing library of openPPL supporting various platforms ☆507 · Updated 9 months ago
- 🛠 A lite C++ AI toolkit: 100+ models with MNN, ORT and TRT, including Det, Seg, Stable-Diffusion, Face-Fusion, etc. 🎉 ☆4,196 · Updated this week
- Deploy your model with TensorRT quickly ☆768 · Updated last year
- A unified library of state-of-the-art model optimization techniques like quantization, pruning, distillation, speculative decoding, etc. … ☆1,078 · Updated 2 weeks ago
- AITemplate is a Python framework which renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (N… ☆4,662 · Updated last week
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source compone… ☆11,912 · Updated last week
- ☆1,030 · Updated last year
- Implementation of popular deep learning networks with TensorRT network definition API ☆7,455 · Updated 2 months ago
- High-performance Inference and Deployment Toolkit for LLMs and VLMs based on PaddlePaddle ☆3,427 · Updated this week
- ONNX Optimizer ☆735 · Updated 2 weeks ago
- SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX R… ☆2,461 · Updated last week
- TensorRT C++ API Tutorial ☆747 · Updated 8 months ago
- Examples for using ONNX Runtime for machine learning inferencing ☆1,441 · Updated this week
- ☆2,503 · Updated last year
- Sample codes for my CUDA programming book ☆1,765 · Updated 5 months ago
- AIMET is a library that provides advanced quantization and compression techniques for trained neural network models ☆2,383 · Updated this week