☆143 · Apr 28, 2026 · Updated last week
Alternatives and similar repositories for perf_analyzer
Users interested in perf_analyzer are comparing it to the libraries listed below.
- Model Express is a Rust-based component meant to be placed next to existing model inference systems to speed up their startup times and i… ☆56 · Updated this week
- Triton Model Analyzer is a CLI tool to help with better understanding of the compute and memory requirements of the Triton Inference Serv… ☆510 · Apr 28, 2026 · Updated last week
- AIPerf is a comprehensive benchmarking tool that measures the performance of generative AI models served by your preferred inference solu… ☆253 · Apr 30, 2026 · Updated last week
- Triton Python, C++, and Java client libraries, and gRPC-generated client examples for Go, Java, and Scala. ☆688 · Updated this week
- LLMPerf is a library for validating and benchmarking LLMs. ☆1,111 · Dec 9, 2024 · Updated last year
- This repository contains tutorials and examples for Triton Inference Server. ☆832 · Apr 21, 2026 · Updated 2 weeks ago
- The Triton TensorRT-LLM Backend. ☆934 · Updated this week
- ffmpeg+cuvid+tensorrt+multicamera. ☆12 · Dec 31, 2024 · Updated last year
- Stable Diffusion in TensorRT 8.5+. ☆15 · Mar 19, 2023 · Updated 3 years ago
- PyTriton is a Flask/FastAPI-like interface that simplifies Triton's deployment in Python environments. ☆844 · Aug 13, 2025 · Updated 8 months ago
- A Datacenter Scale Distributed Inference Serving Framework. ☆6,701 · Apr 30, 2026 · Updated last week
- GenAI inference performance benchmarking tool. ☆180 · Updated this week
- Common source, scripts, and utilities for creating Triton backends. ☆369 · Apr 13, 2026 · Updated 3 weeks ago
- NVIDIA Inference Xfer Library (NIXL). ☆1,011 · Apr 30, 2026 · Updated last week
- The Triton Inference Server provides an optimized cloud and edge inferencing solution. ☆10,625 · Apr 29, 2026 · Updated last week
- Triton backend that enables pre-processing, post-processing, and other logic to be implemented in Python. ☆673 · Apr 15, 2026 · Updated 3 weeks ago
- kerf is a tool designed to orchestrate and manage multiple kernel instances on a single host. ☆26 · Jan 23, 2026 · Updated 3 months ago
- HunyuanDiT with TensorRT and libtorch. ☆18 · May 22, 2024 · Updated last year
- Fusing 2D Material World Knowledge on 3D Geometry. ☆53 · Mar 23, 2026 · Updated last month
- ☆18 · Dec 7, 2023 · Updated 2 years ago
- A unified library of SOTA model optimization techniques like quantization, pruning, distillation, speculative decoding, etc. It compresse… ☆2,591 · Updated this week
- Disaggregated serving system for Large Language Models (LLMs). ☆807 · Apr 6, 2025 · Updated last year
- Custom payload for sending nvdsanalytics messages to Kafka. ☆22 · Nov 16, 2022 · Updated 3 years ago
- 🔶 Compressed bitvector/container supporting efficient random access and rank queries. ☆46 · Sep 4, 2024 · Updated last year
- Third-place preliminary-round solution for the generative AI model optimization track of the Tianchi NVIDIA TensorRT Hackathon 2023. ☆50 · Aug 16, 2023 · Updated 2 years ago
- The Triton backend for TensorFlow. ☆56 · Nov 22, 2025 · Updated 5 months ago
- A simple tool that can generate TensorRT plugin code quickly. ☆240 · Jul 11, 2023 · Updated 2 years ago
- The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's Python API. ☆140 · Apr 8, 2026 · Updated 3 weeks ago
- Deploys the NanoDet detection algorithm on the OpenVINO inference framework, with rewritten pre- and post-processing for very fast detection on Intel CPU platforms; the model is also quantized to int8 (PTQ) with NNCF and PPQ for even faster inference. ☆16 · Jun 14, 2023 · Updated 2 years ago
- The Runtimex package helps expose Go runtime internals safely. ☆12 · Feb 19, 2025 · Updated last year
- The core library and APIs implementing the Triton Inference Server. ☆170 · Apr 30, 2026 · Updated last week
- Use ESP32 & MCP over MQTT to build smart devices powered by AI. ☆24 · Aug 25, 2025 · Updated 8 months ago
- treelite runtime binding in Rust. ☆12 · Jun 12, 2025 · Updated 10 months ago
- ☆12 · Sep 1, 2023 · Updated 2 years ago
- ☆341 · Updated this week
- Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models with a focus on NVIDIA GPUs. ☆221 · Feb 3, 2026 · Updated 3 months ago
- Compare different hardware platforms via the Roofline Model for LLM inference tasks. ☆119 · Mar 13, 2024 · Updated 2 years ago
- OpenVINO backend for Triton. ☆37 · Apr 15, 2026 · Updated 3 weeks ago
- Slowdown prediction module of Echo: Simulating Distributed Training at Scale. ☆13 · May 17, 2025 · Updated 11 months ago
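Several of the tools listed above (perf_analyzer, Triton Model Analyzer, AIPerf, LLMPerf) share one core measurement idea: drive requests at an inference endpoint, record per-request latency, and report throughput and tail-latency percentiles. The sketch below is a minimal, stdlib-only illustration of that loop; it is hypothetical and not taken from any of these tools, which additionally drive many concurrent request streams against a live server.

```python
import statistics
import time


def benchmark(infer, requests):
    """Run each request through `infer` sequentially and collect metrics.

    `infer` stands in for a call to a model server; real tools issue
    these calls concurrently and over HTTP/gRPC.
    """
    latencies = []
    start = time.perf_counter()
    for req in requests:
        t0 = time.perf_counter()
        infer(req)                                  # the measured call
        latencies.append(time.perf_counter() - t0)  # per-request latency
    wall = time.perf_counter() - start
    return {
        "throughput_rps": len(requests) / wall,
        # statistics.quantiles(n=100) yields 99 cut points; index 98 is p99.
        "p50_ms": statistics.median(latencies) * 1e3,
        "p99_ms": statistics.quantiles(latencies, n=100)[98] * 1e3,
    }


# Example with a stand-in model function instead of a real server:
stats = benchmark(lambda r: sum(range(1000)), list(range(50)))
```

The p50/p99 split matters because benchmarking tools in this space report tail latency, not just averages: a server can look fast on the median while the 99th percentile reveals queueing or batching stalls.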