triton-inference-server / dali_backend
The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's python API.
☆140 · Feb 6, 2026 · Updated last week
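Because dali_backend plugs into Triton like any other backend, a model served through it is described by a standard `config.pbtxt` with `backend: "dali"`. The fragment below is an illustrative sketch, not copied from the repository: the model name `dali_preprocess` and the tensor names, types, and shapes are assumptions chosen for a typical image-decoding pipeline.

```protobuf
name: "dali_preprocess"
backend: "dali"            # route requests to the DALI backend
max_batch_size: 256

input [
  {
    name: "DALI_INPUT_0"   # encoded image bytes (hypothetical tensor name)
    data_type: TYPE_UINT8
    dims: [ -1 ]           # variable-length byte stream per sample
  }
]

output [
  {
    name: "DALI_OUTPUT_0"  # decoded, preprocessed image (hypothetical tensor name)
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]
  }
]
```

In this setup the serialized DALI pipeline sits alongside the config in the model repository, and Triton hands batches of raw inputs to DALI for GPU-side preprocessing before they reach the downstream inference model.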
Alternatives and similar repositories for dali_backend
Users interested in dali_backend are comparing it to the libraries listed below.
- Triton Model Analyzer is a CLI tool that helps users better understand the compute and memory requirements of the Triton Inference Serv… ☆504 · Feb 3, 2026 · Updated last week
- Triton backend that enables pre-processing, post-processing, and other logic to be implemented in Python. ☆667 · Feb 7, 2026 · Updated last week
- The Triton Inference Server provides an optimized cloud and edge inferencing solution. ☆10,334 · Feb 6, 2026 · Updated last week
- Triton Python, C++, and Java client libraries, and gRPC-generated client examples for Go, Java, and Scala. ☆677 · Feb 6, 2026 · Updated last week
- This repository deploys YOLOv4 as an optimized TensorRT engine to Triton Inference Server ☆284 · Jun 2, 2022 · Updated 3 years ago
- Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models with a focus on NVIDIA GPUs. ☆217 · Feb 3, 2026 · Updated last week
- Pilgrim Project: torch2trt, quickly convert your PyTorch model to a TensorRT engine. ☆19 · Oct 10, 2020 · Updated 5 years ago
- The Triton backend for PyTorch TorchScript models. ☆173 · Feb 5, 2026 · Updated last week
- This repository contains tutorials and examples for Triton Inference Server ☆819 · Feb 4, 2026 · Updated last week
- ☆329 · Updated this week
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT ☆2,944 · Updated this week
- Training runs 4-6× faster than the original caffe-ssd ☆10 · Jun 22, 2021 · Updated 4 years ago
- ☆18 · Nov 11, 2025 · Updated 3 months ago
- Manifest files for CE container packages ☆13 · Oct 14, 2024 · Updated last year
- TF 2 implementation of Learning to Resize Images for Computer Vision Tasks (https://arxiv.org/abs/2103.09950v1). ☆53 · Oct 12, 2021 · Updated 4 years ago
- ☆33 · Jul 7, 2022 · Updated 3 years ago
- Triton Migration Guide for DeepStreamSDK. ☆15 · Dec 19, 2023 · Updated 2 years ago
- The Triton backend for the ONNX Runtime. ☆173 · Updated this week
- Neural Image Assessment, a tool to automatically inspect the quality of images. ☆12 · Mar 1, 2022 · Updated 3 years ago
- ☆14 · Jun 12, 2015 · Updated 10 years ago
- Comparison of LLM API performance metrics - an in-depth analysis of key metrics such as TTFT and TPS ☆20 · Sep 12, 2024 · Updated last year
- Deploy RT-DETR with ONNX from the PaddlePaddle framework and graph cut ☆31 · May 5, 2023 · Updated 2 years ago
- Demonstration of the use of TensorRT and Triton ☆16 · Feb 9, 2021 · Updated 5 years ago
- Triton CLI is an open source command line interface that enables users to create, deploy, and profile models served by the Triton Inferen… ☆73 · Updated this week
- ☆134 · Updated this week
- DeepStream SDK Python bindings and sample applications ☆1,782 · Oct 14, 2025 · Updated 4 months ago
- Transformer-related optimization, including BERT and GPT ☆39 · Feb 10, 2023 · Updated 3 years ago
- The core library and APIs implementing the Triton Inference Server. ☆164 · Feb 4, 2026 · Updated last week
- Tencent Distribution of TVM ☆15 · Apr 7, 2023 · Updated 2 years ago
- CV-CUDA™ is an open-source, GPU-accelerated library for cloud-scale image processing and computer vision. ☆2,650 · Jan 22, 2026 · Updated 3 weeks ago
- This project provides a face recognition system via OpenCV 4 ☆18 · Jan 16, 2019 · Updated 7 years ago
- TensorRT Plugin Autogen Tool ☆366 · Apr 7, 2023 · Updated 2 years ago
- Common source, scripts, and utilities shared across all Triton repositories. ☆79 · Updated this week
- ☆413 · Nov 11, 2023 · Updated 2 years ago
- ☆22 · Jun 30, 2021 · Updated 4 years ago
- [SIGGRAPH Asia 2025] InfiniHuman: Infinite 3D Human Creation with Precise Control ☆84 · Oct 14, 2025 · Updated 4 months ago
- Retrained SSD-ResNet50 model to detect multiple fashion items ☆18 · Feb 26, 2019 · Updated 6 years ago
- ByteTrack for DeepStream 6.4 ☆23 · Mar 11, 2024 · Updated last year
- CUDA 12.2 HMM demos ☆20 · Jul 26, 2024 · Updated last year