triton-inference-server / paddlepaddle_backend
☆36 · Updated last year
Alternatives and similar repositories for paddlepaddle_backend
Users interested in paddlepaddle_backend are comparing it to the libraries listed below.
- Serving Inside Pytorch ☆165 · Updated 2 weeks ago
- Compare multiple optimization methods on Triton to improve model service performance ☆52 · Updated last year
- The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's Python API. ☆139 · Updated 3 weeks ago
- The Triton backend for TensorRT. ☆79 · Updated 3 weeks ago
- ☢️ TensorRT Hackathon 2023 finals: Llama model inference acceleration and optimization based on TensorRT-LLM ☆50 · Updated 2 years ago
- Common source, scripts and utilities for creating Triton backends. ☆361 · Updated 3 weeks ago
- ☆268 · Updated 2 weeks ago
- NVIDIA TensorRT Hackathon 2023 finals topic: building and optimizing the Tongyi Qianwen Qwen-7B model with TensorRT-LLM ☆43 · Updated 2 years ago
- A high-performance, highly extensible, easy-to-use framework for AI applications. Provides AI application developers with a unified, high-performance, easy-to-use programming framework for quickly developing cross-device, edge, and cloud AI industry applications on top of full-stack AI services; supports GPU, … ☆160 · Updated last year
- YOLO v5 Object Detection on Triton Inference Server ☆16 · Updated 2 years ago
- TensorRT Plugin Autogen Tool ☆368 · Updated 2 years ago
- Paddle Large Scale Classification Tools, supporting ArcFace, CosFace, PartialFC, and Data Parallel + Model Parallel training. Models include ResNet, ViT, … ☆155 · Updated 2 years ago
- Large Language Model Onnx Inference Framework ☆36 · Updated last week
- ☆80 · Updated last month
- PaddlePaddle Developer Community ☆127 · Updated last week
- Triton Python, C++, and Java client libraries, and gRPC-generated client examples for Go, Java, and Scala. ☆665 · Updated last week
- OneFlow->ONNX ☆43 · Updated 2 years ago
- A Toolkit to Help Optimize Large ONNX Models ☆162 · Updated last month
- ☆102 · Updated 4 years ago
- Triton Inference Server Model Config and Client Scripts ☆32 · Updated 3 years ago
- Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models with a focus on NVIDIA GPUs. ☆213 · Updated 7 months ago
- PaddlePaddle custom device implementation (custom hardware integration for PaddlePaddle) ☆100 · Updated last week
- The Triton backend for the ONNX Runtime. ☆168 · Updated this week
- Triton server ensemble model demo ☆30 · Updated 3 years ago
- ☆120 · Updated 2 years ago
- This repository deploys YOLOv4 as an optimized TensorRT engine to Triton Inference Server ☆286 · Updated 3 years ago
- The Triton backend for TensorFlow. ☆55 · Updated 2 weeks ago
- Common utilities for ONNX converters ☆287 · Updated 3 months ago
- ☆26 · Updated 2 years ago
- Triton Model Analyzer is a CLI tool that helps users better understand the compute and memory requirements of the Triton Inference Serv… ☆499 · Updated last week