IBM / onnx-mlir-serving
ONNX Serving is a project written in C++ to serve onnx-mlir compiled models over gRPC and other protocols. Benefiting from its C++ implementation, ONNX Serving has very low latency overhead and high throughput. ONNX Serving provides dynamic batch aggregation and a worker pool to fully utilize the AI accelerators on the machine.
☆24 Updated 3 weeks ago
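The dynamic batch aggregation mentioned above can be illustrated with a minimal sketch (in Python rather than the project's C++, and not the project's actual API): incoming requests accumulate in a queue, and a batch is flushed either when it reaches a maximum size or when a short timeout expires, trading a little latency for better accelerator utilization.

```python
import queue
import time

def aggregate_batches(q, max_batch=4, timeout=0.01):
    """Hypothetical sketch: drain requests from q into batches of at most
    max_batch, waiting up to `timeout` seconds for stragglers before flushing."""
    batches = []
    while True:
        try:
            first = q.get(timeout=timeout)  # block briefly for the next request
        except queue.Empty:
            break  # no more traffic: stop aggregating
        batch = [first]
        deadline = time.monotonic() + timeout
        while len(batch) < max_batch:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                break  # timeout hit: flush a partial batch
            try:
                batch.append(q.get(timeout=remaining))
            except queue.Empty:
                break
        batches.append(batch)  # in a real server, a worker would run inference here
    return batches

q = queue.Queue()
for i in range(10):
    q.put(i)
batches = aggregate_batches(q, max_batch=4)
print([len(b) for b in batches])  # → [4, 4, 2]
```

In the actual server a pool of worker threads would pull these batches and run the compiled model, which is what lets a C++ implementation keep the aggregation overhead low.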
Alternatives and similar repositories for onnx-mlir-serving
Users interested in onnx-mlir-serving are comparing it to the libraries listed below.
- A lightweight, Pythonic frontend for MLIR ☆81 Updated last year
- ☆24 Updated last year
- ☆50 Updated last year
- Unified compiler/runtime for interfacing with PyTorch Dynamo. ☆100 Updated 2 weeks ago
- ☆69 Updated 2 years ago
- Notes and artifacts from the ONNX steering committee ☆26 Updated this week
- MLIR-based partitioning system ☆86 Updated this week
- Standalone Flash Attention v2 kernel without libtorch dependency ☆109 Updated 8 months ago
- A sandbox for quick iteration and experimentation on projects related to IREE, MLIR, and LLVM ☆57 Updated 2 months ago
- ☆9 Updated 2 years ago
- TORCH_LOGS parser for PT2 ☆38 Updated last week
- MatMul performance benchmarks for a single CPU core, comparing hand-engineered and codegen kernels. ☆131 Updated last year
- IREE's PyTorch frontend, based on Torch Dynamo. ☆85 Updated this week
- ☆71 Updated 2 months ago
- A tracing JIT compiler for PyTorch ☆13 Updated 3 years ago
- TileFusion is an experimental C++ macro kernel template library that raises the abstraction level of CUDA C for tile processing. ☆88 Updated last week
- A demo of how to write a high-performance convolution that runs on Apple silicon ☆54 Updated 3 years ago
- FlagTree is a unified compiler for multiple AI chips, forked from triton-lang/triton. ☆24 Updated this week
- Benchmark code for the "Online normalizer calculation for softmax" paper ☆94 Updated 6 years ago
- TPP experimentation on MLIR for linear algebra ☆131 Updated this week
- Ahead of Time (AOT) Triton Math Library ☆64 Updated last week
- ☆36 Updated this week
- The missing pieces (as far as boilerplate reduction goes) of the upstream MLIR Python bindings. ☆99 Updated last week
- An experimental CPU backend for Triton (https://github.com/openai/triton) ☆43 Updated 2 months ago
- An easy way to run, test, benchmark, and tune OpenCL kernel files ☆23 Updated last year
- A tracing JIT for PyTorch ☆17 Updated 2 years ago
- Open-source cross-platform compiler for compute-intensive loops used in AI algorithms, from Microsoft Research ☆109 Updated last year
- A language and compiler for irregular tensor programs. ☆138 Updated 6 months ago
- MLIRX is now defunct. Please see PolyBlocks - https://docs.polymagelabs.com ☆38 Updated last year
- High-speed GEMV kernels with up to 2.7x speedup over the PyTorch baseline. ☆109 Updated 10 months ago