IBM / onnx-mlir-serving
ONNX Serving is a project written in C++ that serves onnx-mlir compiled models over gRPC and other protocols. Benefiting from its C++ implementation, ONNX Serving has very low latency overhead and high throughput. It provides dynamic batch aggregation and a worker pool to fully utilize the AI accelerators on the machine.
☆24 · Updated 2 months ago
Alternatives and similar repositories for onnx-mlir-serving
Users that are interested in onnx-mlir-serving are comparing it to the libraries listed below
- ☆24 · Updated last year
- A lightweight, Pythonic frontend for MLIR ☆80 · Updated last year
- ☆69 · Updated 2 years ago
- ☆50 · Updated last year
- An easy way to run, test, benchmark, and tune OpenCL kernel files ☆23 · Updated last year
- Unified compiler/runtime for interfacing with PyTorch Dynamo ☆100 · Updated last week
- Notes and artifacts from the ONNX steering committee ☆26 · Updated last week
- Play with MLIR right in your browser ☆135 · Updated 2 years ago
- Open deep learning compiler stack for CPU, GPU, and specialized accelerators ☆34 · Updated 2 years ago
- MatMul performance benchmarks for a single CPU core, comparing hand-engineered and codegen kernels ☆133 · Updated last year
- How to design CPU GEMM on x86 with avx256 that can beat OpenBLAS ☆70 · Updated 6 years ago
- A tracing JIT for PyTorch ☆17 · Updated 2 years ago
- torch::deploy (multipy for non-torch uses) is a system that lets you get around the GIL problem by running multiple Python interpreters i… ☆180 · Updated last week
- Yet another polyhedral compiler for deep learning ☆19 · Updated 2 years ago
- C++ implementations of various tokenizers (SentencePiece, tiktoken, etc.) ☆32 · Updated this week
- ☆9 · Updated 2 years ago
- MLIR-based partitioning system ☆103 · Updated this week
- A demo of how to write a high-performance convolution that runs on Apple silicon ☆54 · Updated 3 years ago
- Open-source cross-platform compiler for compute-intensive loops used in AI algorithms, from Microsoft Research ☆109 · Updated last year
- ☆310 · Updated 6 months ago
- A sandbox for quick iteration and experimentation on projects related to IREE, MLIR, and LLVM ☆59 · Updated 3 months ago
- ONNX Command-Line Toolbox ☆35 · Updated 9 months ago
- Model compression for ONNX ☆96 · Updated 7 months ago
- Standalone FlashAttention v2 kernel without a libtorch dependency ☆110 · Updated 10 months ago
- Benchmark scripts for TVM ☆74 · Updated 3 years ago
- IREE's PyTorch frontend, based on Torch Dynamo ☆90 · Updated this week
- ☆73 · Updated 3 months ago
- The Triton backend for ONNX Runtime ☆155 · Updated last week
- Common utilities for ONNX converters ☆274 · Updated 2 weeks ago
- Static analysis framework for analyzing programs written in TVM's Relay IR ☆28 · Updated 5 years ago