triton-inference-server / paddlepaddle_backend
☆35 · Updated last year
Alternatives and similar repositories for paddlepaddle_backend
Users interested in paddlepaddle_backend are comparing it to the libraries listed below.
- The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's Python API. ☆133 · Updated 2 weeks ago
- Compare multiple optimization methods on Triton to improve model service performance ☆50 · Updated last year
- Serving inside PyTorch ☆160 · Updated 3 weeks ago
- Triton Inference Server Model Config and Client Scripts ☆32 · Updated 3 years ago
- PaddlePaddle custom device implementation (custom hardware integration for PaddlePaddle) ☆84 · Updated this week
- PaddlePaddle Developer Community ☆111 · Updated this week
- The Triton backend for TensorRT. ☆76 · Updated 3 weeks ago
- Common source, scripts and utilities for creating Triton backends. ☆324 · Updated 3 weeks ago
- OneFlow->ONNX ☆43 · Updated 2 years ago
- Common source, scripts and utilities shared across all Triton repositories. ☆72 · Updated 3 weeks ago
- ☆259 · Updated last week
- Large Language Model ONNX Inference Framework ☆35 · Updated 4 months ago
- ☆78 · Updated 2 months ago
- The Triton backend for the ONNX Runtime. ☆148 · Updated 3 weeks ago
- ☢️ TensorRT Hackathon 2023 finals: Llama model inference acceleration and optimization based on TensorRT-LLM ☆48 · Updated last year
- TensorRT Plugin Autogen Tool ☆369 · Updated 2 years ago
- Simple Dynamic Batching Inference ☆145 · Updated 3 years ago
- A Toolkit to Help Optimize Large ONNX Models ☆158 · Updated last year
- A simple tool that can generate TensorRT plugin code quickly. ☆231 · Updated last year
- Transformer-related optimization, including BERT and GPT ☆17 · Updated last year
- ☆127 · Updated 5 months ago
- Transformer-related optimization, including BERT and GPT ☆39 · Updated 2 years ago
- YOLO v5 Object Detection on Triton Inference Server ☆15 · Updated 2 years ago
- Triton Model Analyzer is a CLI tool to help with better understanding of the compute and memory requirements of the Triton Inference Server models ☆476 · Updated last month
- ☆31 · Updated 2 years ago
- ☆138 · Updated last year
- NVIDIA TensorRT Hackathon 2023 finals topic: building and optimizing the Tongyi Qianwen Qwen-7B model with TensorRT-LLM ☆42 · Updated last year
- ☆58 · Updated 6 months ago
- Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models with a focus on NVIDIA GPUs. ☆202 · Updated last month
- This repository deploys YOLOv4 as an optimized TensorRT engine to Triton Inference Server ☆285 · Updated 3 years ago
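
Several of the repositories above (the model config and client scripts, and the TensorRT, ONNX Runtime, and PaddlePaddle backends) revolve around serving models behind Triton Inference Server. As a point of reference, here is a minimal client sketch using the official `tritonclient` package; the model name (`my_paddle_model`) and tensor names (`x`, `y`) are placeholder assumptions, not taken from any repo listed here, and must match the model's `config.pbtxt`:

```python
import numpy as np
import tritonclient.http as httpclient

# Connect to a Triton server running locally (default HTTP port is 8000).
client = httpclient.InferenceServerClient(url="localhost:8000")

# Build the request: "x" / shape / dtype are hypothetical and must match
# the input declared in the model's config.pbtxt.
infer_input = httpclient.InferInput("x", [1, 3, 224, 224], "FP32")
infer_input.set_data_from_numpy(
    np.random.rand(1, 3, 224, 224).astype(np.float32)
)

# Run inference and read back the (hypothetical) output tensor "y".
response = client.infer(model_name="my_paddle_model", inputs=[infer_input])
print(response.as_numpy("y").shape)
```

The same client code works regardless of which backend (PaddlePaddle, TensorRT, ONNX Runtime) executes the model, which is what makes the backend repositories above interchangeable from the caller's point of view.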