NVIDIA / TensorRT
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
☆11,180 · Updated 2 weeks ago
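As a rough orientation for the entries below, here is a minimal sketch of the core TensorRT workflow: parse an ONNX model into a network definition and build a serialized engine. It assumes the TensorRT 8.x Python bindings and a local `model.onnx`; the file names, workspace size, and FP16 flag are illustrative placeholders, not taken from this listing.

```python
# Minimal sketch (TensorRT 8.x Python API assumed): build a serialized engine
# from an ONNX model. File names and the 1 GiB workspace are placeholders.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
# ONNX models require an explicit-batch network definition.
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("failed to parse model.onnx")

config = builder.create_builder_config()
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)  # 1 GiB
if builder.platform_has_fast_fp16:
    config.set_flag(trt.BuilderFlag.FP16)  # optional reduced precision

# The serialized plan can be written to disk and later deserialized with
# trt.Runtime for inference.
engine_bytes = builder.build_serialized_network(network, config)
with open("model.plan", "wb") as f:
    f.write(engine_bytes)
```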
Alternatives and similar repositories for TensorRT:
Users interested in TensorRT are comparing it to the libraries listed below.
- ONNX-TensorRT: TensorRT backend for ONNX ☆3,017 · Updated this week
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT ☆2,676 · Updated this week
- An easy to use PyTorch to TensorRT converter ☆4,674 · Updated 6 months ago
- Open standard for machine learning interoperability ☆18,432 · Updated this week
- Implementation of popular deep learning networks with TensorRT network definition API ☆7,178 · Updated 2 months ago
- A GPU-accelerated library containing highly optimized building blocks and an execution engine for data processing to accelerate deep lear… ☆5,276 · Updated this week
- The Triton Inference Server provides an optimized cloud and edge inferencing solution. ☆8,742 · Updated this week
- ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator ☆15,625 · Updated this week
- Simplify your onnx model (see the export sketch after this list) ☆3,976 · Updated 5 months ago
- A PyTorch Extension: Tools for easy mixed precision and distributed training in Pytorch ☆8,534 · Updated last week
- CV-CUDA™ is an open-source, GPU accelerated library for cloud-scale image processing and computer vision. ☆2,437 · Updated 2 months ago
- Transformer related optimization, including BERT, GPT ☆6,022 · Updated 10 months ago
- Visualizer for neural network, deep learning and machine learning models ☆29,375 · Updated this week
- Open deep learning compiler stack for cpu, gpu and specialized accelerators ☆12,024 · Updated this week
- Development repository for the Triton language and compiler ☆14,406 · Updated this week
- OpenMMLab Model Deployment Framework ☆2,847 · Updated 4 months ago
- An open source AutoML toolkit to automate the machine learning lifecycle, including feature engineering, neural architecture search, model c… ☆14,119 · Updated 7 months ago
- CUDA Templates for Linear Algebra Subroutines ☆6,210 · Updated last week
- Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet. ☆14,385 · Updated 2 weeks ago
- YOLOX is a high-performance anchor-free YOLO, exceeding yolov3~v5 with MegEngine, ONNX, TensorRT, ncnn, and OpenVINO supported. Documenta… ☆9,635 · Updated 2 months ago
- Serve, optimize and scale PyTorch models in production ☆4,290 · Updated this week
- Samples for CUDA Developers which demonstrate features in CUDA Toolkit ☆6,914 · Updated this week
- ncnn is a high-performance neural network inference framework optimized for the mobile platform ☆20,939 · Updated this week
- OpenVINO™ is an open source toolkit for optimizing and deploying AI inference ☆7,808 · Updated this week
- Optimized primitives for collective multi-GPU communication ☆3,463 · Updated 3 weeks ago
- Convert TensorFlow, Keras, Tensorflow.js and Tflite models to ONNX ☆2,368 · Updated 2 weeks ago
- AITemplate is a Python framework which renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (N… ☆4,596 · Updated 2 months ago
- End-to-End Object Detection with Transformers ☆13,972 · Updated 11 months ago
- OpenMMLab Computer Vision Foundation ☆6,011 · Updated last week
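Several of the entries above (ONNX, ONNX Runtime, tf2onnx, onnx-simplifier) cover the export side of the same pipeline. The sketch below is a hedged illustration, not taken from any of these repositories' documentation: it exports a PyTorch model to ONNX with `torch.onnx.export` and cleans the graph with onnx-simplifier before it is handed to a backend such as TensorRT or ONNX Runtime. The ResNet-18 model, file names, and opset version are arbitrary placeholders.

```python
# Hedged sketch: PyTorch -> ONNX export plus onnx-simplifier cleanup.
# Model choice, file names, and opset version are illustrative only.
import torch
import torchvision
import onnx
from onnxsim import simplify

model = torchvision.models.resnet18(weights=None).eval()
dummy = torch.randn(1, 3, 224, 224)  # example input used to trace the graph

# Export the traced graph to an ONNX file.
torch.onnx.export(
    model, dummy, "model.onnx",
    input_names=["input"], output_names=["output"],
    opset_version=17,
)

# onnx-simplifier folds constants and strips redundant nodes; `ok` reports
# whether the simplified graph still matches the original numerically.
model_onnx = onnx.load("model.onnx")
model_simplified, ok = simplify(model_onnx)
assert ok, "simplified model failed the equivalence check"
onnx.save(model_simplified, "model_simplified.onnx")
```

Simplification is optional; its main effect is a smaller, constant-folded graph, which downstream parsers generally handle more cleanly than the raw export.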