CV-CUDA™ is an open-source, GPU-accelerated library for cloud-scale image processing and computer vision.
☆2,679 · Updated Mar 31, 2026
Alternatives and similar repositories for CV-CUDA
Users interested in CV-CUDA are comparing it to the libraries listed below.
- ppl.cv is a high-performance image processing library from openPPL that supports multiple platforms. ☆515 · Updated Oct 30, 2024
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source compone… ☆12,947 · Updated Apr 13, 2026
- ONNX-TensorRT: TensorRT backend for ONNX. ☆3,204 · Updated Mar 25, 2026
- A set of Python bindings to C++ libraries which provide full HW acceleration for video decoding, encoding, and GPU-accelerated color space … ☆1,379 · Updated Jun 10, 2024
- Implementation of popular deep learning networks with the TensorRT network definition API. ☆7,765 · Updated Mar 7, 2026
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT. ☆2,966 · Updated this week
- Simple samples for TensorRT programming. ☆1,652 · Updated this week
- A simple tool that can generate TensorRT plugin code quickly. ☆240 · Updated Jul 11, 2023
- A primitive library for neural networks. ☆1,368 · Updated Nov 24, 2024
- AITemplate is a Python framework that renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (N… ☆4,717 · Updated Apr 9, 2026
- PPL Quantization Tool (PPQ) is a powerful offline neural network quantization tool. ☆1,795 · Updated Mar 28, 2024
- A GPU-accelerated library containing highly optimized building blocks and an execution engine for data processing to accelerate deep lear… ☆5,684 · Updated this week
- Simplify your ONNX model. ☆4,331 · Updated Apr 29, 2026
- A C++ library based on TensorRT integration. ☆2,875 · Updated May 24, 2023
- An easy-to-use PyTorch-to-TensorRT converter. ☆4,865 · Updated Aug 17, 2024
- Transformer-related optimization, including BERT and GPT. ☆6,415 · Updated Mar 27, 2024
- Decode JPEG images on the GPU using PyTorch. ☆93 · Updated Oct 9, 2023
- A project demonstrating Lidar-related AI solutions, including three GPU-accelerated Lidar/camera DL networks (PointPillars, CenterPoint, … ☆1,795 · Updated Mar 10, 2026
- OpenMMLab Model Deployment Framework. ☆3,118 · Updated Sep 30, 2024
- Deploy your model with TensorRT quickly. ☆764 · Updated Nov 21, 2023
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… ☆3,312 · Updated this week
- Lightning-fast C++/CUDA neural network framework. ☆4,473 · Updated Apr 21, 2026
- YOLOv3, YOLOv4, YOLOv5, YOLOv5-Lite, YOLOv6-v1, YOLOv6-v2, YOLOv7, YOLOX, YOLOX-Lite, PP-YOLOE, PP-PicoDet-Plus, YOLO-Fastest v2, FastestDet, YOLOv5-S… ☆766 · Updated Oct 25, 2022
- The Triton Inference Server provides an optimized cloud and edge inferencing solution. ☆10,625 · Updated Apr 29, 2026
- High-performance inference and deployment toolkit for LLMs and VLMs based on PaddlePaddle. ☆3,679 · Updated Apr 29, 2026
- 🛠 A lite C++ AI toolkit: 100+ models with MNN, ORT, and TRT, including Det, Seg, Stable-Diffusion, Face-Fusion, etc. 🎉 ☆4,406 · Updated Mar 19, 2026
- CUDA Library Samples. ☆2,385 · Updated Apr 20, 2026
- YOLOX is a high-performance anchor-free YOLO, exceeding YOLOv3 through v5, with MegEngine, ONNX, TensorRT, ncnn, and OpenVINO supported. Documenta… ☆10,449 · Updated Jun 8, 2025
- TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizat… ☆13,545 · Updated this week
- CUDA Templates and Python DSLs for High-Performance Linear Algebra. ☆9,663 · Updated Apr 25, 2026
- TensorRT Plugin Autogen Tool. ☆368 · Updated Apr 7, 2023
- YOLOv6: a single-stage object detection framework dedicated to industrial applications. ☆5,885 · Updated Aug 7, 2024
- A new TensorRT integration; easy to integrate into many tasks. ☆454 · Updated Apr 2, 2023
- A tool for modifying ONNX models visually, based on Netron and Flask. ☆1,623 · Updated Nov 19, 2025
- MegCC is a deep learning model compiler with an ultra-lightweight runtime, high efficiency, and simple portability. ☆483 · Updated Oct 23, 2024
- Development repository for the Triton language and compiler. ☆19,087 · Updated this week
- Samples for CUDA developers demonstrating features in the CUDA Toolkit. ☆9,131 · Updated Mar 30, 2026
- TNN: developed by Tencent Youtu Lab and Guangying Lab, a uniform deep learning inference framework for mobile, desktop, and server. TNN is … ☆4,631 · Updated May 9, 2025
- ncnn is a high-performance neural network inference framework optimized for the mobile platform. ☆23,178 · Updated Apr 22, 2026