ONNX Runtime Server: A server that provides TCP and HTTP/HTTPS REST APIs for ONNX inference.
☆185 · Apr 11, 2026 · Updated this week
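Since the server exposes an HTTP REST API for inference, a client only needs to POST a JSON payload. The sketch below builds such a request with Python's standard library; note that the endpoint path (`/api/sessions/<name>`) and the payload shape are illustrative assumptions, not the server's documented schema — check the onnxruntime-server docs for the actual API.

```python
import json
import urllib.request

def build_inference_request(base_url: str, session: str,
                            inputs: dict) -> urllib.request.Request:
    """Build an HTTP POST request carrying a JSON inference payload.

    The URL path and the {"inputs": ...} body shape are hypothetical;
    adapt them to the REST schema of the server you are targeting.
    """
    body = json.dumps({"inputs": inputs}).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/api/sessions/{session}",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Example: a request for a hypothetical "mnist" session with one
# flattened 28x28 input image.
req = build_inference_request("http://localhost:8080", "mnist",
                              {"x": [[0.0] * 784]})
print(req.full_url)      # http://localhost:8080/api/sessions/mnist
print(req.get_method())  # POST
```

Sending the request would then be a single `urllib.request.urlopen(req)` call (or the equivalent with any HTTP client), with the JSON response containing the model outputs.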
Alternatives and similar repositories for onnxruntime-server
Users interested in onnxruntime-server are comparing it to the libraries listed below.
- Provides accurate offline voice-to-text services for VR, AR and Android platforms, such as Oculus Quest 1/2/Pro or Pico 3/4 ☆26 · May 21, 2024 · Updated last year
- An ASR toolkit with the freedom of topology ☆10 · Dec 18, 2023 · Updated 2 years ago
- ☆10 · Jul 18, 2024 · Updated last year
- ONNX Serving is a project written in C++ to serve onnx-mlir compiled models over gRPC and other protocols, benefiting from its C++ implementation ☆26 · Sep 17, 2025 · Updated 6 months ago
- Decoders from Kaldi using OpenFst ☆34 · Updated this week
- RKNN-YOLOV5-BatchInference-MultiThreading: multi-image, multi-threaded C++ inference for YOLOv5 ☆22 · Nov 6, 2023 · Updated 2 years ago
- ffmpeg+cuvid+tensorrt+multicamera ☆12 · Dec 31, 2024 · Updated last year
- Simple, high-speed inferencing for YOLOv11 with ONNXRuntime ☆17 · Nov 4, 2024 · Updated last year
- ☆33 · Jul 23, 2024 · Updated last year
- Large Language Model Onnx Inference Framework ☆35 · Nov 25, 2025 · Updated 4 months ago
- ☕️ A vscode extension for netron, support *.pdmodel, *.nb, *.onnx, *.pb, *.h5, *.tflite, *.pth, *.pt, *.mnn, *.param, etc. ☆14 · Jun 4, 2023 · Updated 2 years ago
- LLM deployment project based on ONNX ☆50 · Oct 9, 2024 · Updated last year
- onnxruntime-extensions: a specialized pre- and post-processing library for ONNX Runtime ☆456 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆17 · Jun 3, 2024 · Updated last year
- RKNN model inference and deployment template ☆24 · Aug 11, 2023 · Updated 2 years ago
- Deploy Informative-Drawings with ONNXRuntime to generate sketch drawings; includes both C++ and Python versions ☆14 · Sep 7, 2023 · Updated 2 years ago
- Colab notebooks for Next-gen Kaldi ☆31 · Oct 12, 2025 · Updated 6 months ago
- TensorRT depth-anything for anyone and anywhere ☆15 · Jan 29, 2024 · Updated 2 years ago
- Model compression for ONNX ☆100 · Mar 1, 2026 · Updated last month
- Provides an ensemble model to deploy a YoloV8 ONNX model to Triton ☆42 · Oct 19, 2023 · Updated 2 years ago
- Automatic Speech Recognition at the University of Edinburgh ☆16 · Mar 14, 2021 · Updated 5 years ago
- ONNX Runtime tiny wrapper for openFrameworks ☆15 · Jan 21, 2022 · Updated 4 years ago
- Sample projects for InferenceHelper, a Helper Class for Deep Learning Inference Frameworks: TensorFlow Lite, TensorRT, OpenCV, ncnn, MNN, … ☆22 · Mar 27, 2022 · Updated 4 years ago
- Python wrapper class for OpenVINO Model Server. Users can submit inference requests to OVMS with just a few lines of code ☆10 · Jan 16, 2022 · Updated 4 years ago
- ☆17 · Jan 1, 2024 · Updated 2 years ago
- Uses the excellent Silero VAD with the ONNX Runtime C API for fast detection of audio segments containing speech ☆16 · Sep 20, 2024 · Updated last year
- Inference Llama/Llama2/Llama3 models in NumPy ☆21 · Nov 22, 2023 · Updated 2 years ago
- LLM API performance metrics comparison: an in-depth analysis of TTFT, TPS and other key metrics ☆20 · Sep 12, 2024 · Updated last year
- Stable Diffusion using MNN ☆67 · Sep 28, 2023 · Updated 2 years ago
- Serving Inside PyTorch ☆171 · Feb 3, 2026 · Updated 2 months ago
- Dart plugin wrapping the Sherpa-ONNX runtime. Contains an example for speech recognition with Flutter ☆22 · Jan 3, 2025 · Updated last year
- ONNX-compatible DocShadow: High-Resolution Document Shadow Removal. Supports TensorRT 🚀 ☆25 · Sep 13, 2023 · Updated 2 years ago
- Quantize yolov5 using pytorch_quantization 🚀🚀🚀 ☆14 · Oct 24, 2023 · Updated 2 years ago
- YoloV10 for a bare Raspberry Pi 4 or 5 ☆23 · Jun 21, 2024 · Updated last year
- Multiple GEMM operators are constructed with CUTLASS to support LLM inference ☆20 · Aug 3, 2025 · Updated 8 months ago
- onnxruntime pre-compiled libs ☆177 · Apr 9, 2026 · Updated last week
- Explore LLM model deployment based on AXera's AI chips ☆148 · Apr 1, 2026 · Updated 2 weeks ago
- Libriheavy: a 50,000-hour ASR corpus with punctuation, casing and context ☆217 · Sep 10, 2024 · Updated last year
- The Triton backend for TensorRT ☆88 · Apr 8, 2026 · Updated last week