IBM / onnx-mlir-serving
ONNX Serving is a project written in C++ that serves onnx-mlir compiled models over gRPC and other protocols. Benefiting from its C++ implementation, ONNX Serving has very low latency overhead and high throughput. ONNX Serving provides dynamic batch aggregation and a worker pool to fully utilize AI accelerators on the machine.
☆25 · Updated 2 months ago
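The dynamic batch aggregation mentioned above is the core serving trick: requests that arrive close together are coalesced into one larger batch before being handed to the model, so the accelerator runs fewer, fuller invocations. Below is a minimal, hypothetical C++ sketch of that pattern, assuming a plain mutex/condition-variable queue; the `Request` and `BatchQueue` names and all parameters are illustrative and are not onnx-mlir-serving's actual API.

```cpp
// Sketch of dynamic batch aggregation: a worker blocks until at least one
// request arrives, then gathers whatever else shows up within a short wait
// window, capped at a maximum batch size. Illustrative only.
#include <chrono>
#include <condition_variable>
#include <deque>
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

struct Request { int id; /* input tensors would live here */ };

class BatchQueue {
 public:
  void Push(Request r) {
    { std::lock_guard<std::mutex> lk(mu_); q_.push_back(std::move(r)); }
    cv_.notify_one();
  }
  // Block until one request exists, then wait up to `window` for the queue
  // to fill to `max_batch`, and drain whatever is available at that point.
  std::vector<Request> PopBatch(size_t max_batch, std::chrono::milliseconds window) {
    std::unique_lock<std::mutex> lk(mu_);
    cv_.wait(lk, [&] { return !q_.empty(); });
    cv_.wait_for(lk, window, [&] { return q_.size() >= max_batch; });
    std::vector<Request> batch;
    while (!q_.empty() && batch.size() < max_batch) {
      batch.push_back(std::move(q_.front()));
      q_.pop_front();
    }
    return batch;
  }
 private:
  std::mutex mu_;
  std::condition_variable cv_;
  std::deque<Request> q_;
};

int main() {
  BatchQueue queue;
  std::thread worker([&] {
    for (int served = 0; served < 8;) {
      // In a real server this batch would be packed into one model input.
      auto batch = queue.PopBatch(/*max_batch=*/4, std::chrono::milliseconds(5));
      std::cout << "running model on batch of " << batch.size() << "\n";
      served += static_cast<int>(batch.size());
    }
  });
  for (int i = 0; i < 8; ++i) queue.Push({i});  // simulate concurrent clients
  worker.join();
}
```

The wait window is the usual tuning knob in this pattern: a longer window yields fuller batches and better accelerator utilization, at the cost of per-request latency.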
Alternatives and similar repositories for onnx-mlir-serving
Users interested in onnx-mlir-serving are comparing it to the libraries listed below.
- Unified compiler/runtime for interfacing with PyTorch Dynamo. ☆103 · Updated 3 months ago
- C++ implementations for various tokenizers (sentencepiece, tiktoken, etc.). ☆40 · Updated this week
- Play with MLIR right in your browser ☆138 · Updated 2 years ago
- Notes and artifacts from the ONNX steering committee ☆27 · Updated 2 weeks ago
- A lightweight, Pythonic frontend for MLIR ☆80 · Updated 2 years ago
- ☆24 · Updated last year
- An experimental CPU backend for Triton (https://github.com/openai/triton) ☆47 · Updated 3 months ago
- TORCH_LOGS parser for PT2 ☆64 · Updated last week
- torch::deploy (multipy for non-torch uses) is a system that lets you get around the GIL problem by running multiple Python interpreters i… ☆181 · Updated 2 months ago
- The Triton backend for the ONNX Runtime. ☆167 · Updated last week
- ☆169 · Updated last week
- ☆68 · Updated 2 years ago
- ☆50 · Updated last year
- MLIR-based partitioning system ☆148 · Updated this week
- Common utilities for ONNX converters ☆284 · Updated 2 months ago
- Standalone Flash Attention v2 kernel without libtorch dependency ☆112 · Updated last year
- ☆422 · Updated this week
- High-speed GEMV kernels, up to 2.7x speedup compared to the PyTorch baseline. ☆121 · Updated last year
- Ahead of Time (AOT) Triton Math Library ☆83 · Updated last week
- A Fusion Code Generator for NVIDIA GPUs (commonly known as "nvFuser") ☆362 · Updated this week
- ☆315 · Updated 4 months ago
- OpenAI Triton backend for Intel® GPUs ☆219 · Updated this week
- IREE's PyTorch Frontend, based on Torch Dynamo. ☆101 · Updated this week
- Open deep learning compiler stack for CPU, GPU, and specialized accelerators ☆33 · Updated 2 years ago
- ☆126 · Updated last week
- Efficient in-memory representation for ONNX, in Python ☆32 · Updated last week
- MatMul performance benchmarks for a single CPU core, comparing both hand-engineered and codegen kernels. ☆135 · Updated 2 years ago
- Model compression for ONNX ☆98 · Updated last year
- Open source cross-platform compiler for compute-intensive loops used in AI algorithms, from Microsoft Research ☆112 · Updated 2 years ago
- Benchmark scripts for TVM ☆74 · Updated 3 years ago