IBM / onnx-mlir-serving
ONNX Serving is a project written in C++ that serves onnx-mlir compiled models over gRPC and other protocols. Benefiting from its C++ implementation, ONNX Serving has very low latency overhead and high throughput. ONNX Serving provides dynamic batch aggregation and a worker pool to fully utilize the AI accelerators on the machine.
☆23 · Updated this week
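The description above mentions dynamic batch aggregation and a worker pool without showing how the two fit together. The C++ sketch below illustrates that general serving pattern only; `Request`, `BatchQueue`, the batch size, and the timeout are hypothetical names and values, not the onnx-mlir-serving API, and a real worker would hand each aggregated batch to an onnx-mlir compiled model instead of printing it.

```cpp
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// A single inference request; a real server would carry input tensors and a
// completion callback instead of a bare id.
struct Request { int id; };

// Aggregates incoming requests into batches for the worker pool.
class BatchQueue {
 public:
  void Push(Request r) {
    { std::lock_guard<std::mutex> lk(mu_); q_.push(r); }
    cv_.notify_one();
  }

  // Collects up to max_batch requests, waiting at most `timeout` after the
  // first request arrives so small batches are not delayed indefinitely.
  std::vector<Request> PopBatch(std::size_t max_batch,
                                std::chrono::milliseconds timeout) {
    std::unique_lock<std::mutex> lk(mu_);
    cv_.wait(lk, [&] { return !q_.empty() || done_; });
    std::vector<Request> batch;
    const auto deadline = std::chrono::steady_clock::now() + timeout;
    while (batch.size() < max_batch) {
      if (q_.empty()) {
        if (!cv_.wait_until(lk, deadline,
                            [&] { return !q_.empty() || done_; }))
          break;                  // timed out waiting for more requests
        if (q_.empty()) break;    // woken only because of shutdown
      }
      batch.push_back(q_.front());
      q_.pop();
    }
    return batch;                 // an empty batch signals shutdown
  }

  void Shutdown() {
    { std::lock_guard<std::mutex> lk(mu_); done_ = true; }
    cv_.notify_all();
  }

 private:
  std::mutex mu_;
  std::condition_variable cv_;
  std::queue<Request> q_;
  bool done_ = false;
};

int main() {
  BatchQueue queue;

  // Worker pool: each worker repeatedly pulls an aggregated batch; a real
  // server would run it through an onnx-mlir compiled model here.
  std::vector<std::thread> workers;
  for (int w = 0; w < 2; ++w) {
    workers.emplace_back([&queue, w] {
      for (;;) {
        auto batch = queue.PopBatch(/*max_batch=*/8,
                                    std::chrono::milliseconds(5));
        if (batch.empty()) break;  // shutdown
        std::cout << "worker " << w << " got a batch of "
                  << batch.size() << " requests\n";
      }
    });
  }

  for (int i = 0; i < 20; ++i) queue.Push(Request{i});
  std::this_thread::sleep_for(std::chrono::milliseconds(50));
  queue.Shutdown();
  for (auto& t : workers) t.join();
}
```

The key design choice this sketch shows is the bounded wait: a worker starts assembling a batch as soon as one request is available, but gives up after a short timeout so latency stays low under light load while throughput improves under heavy load.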
Alternatives and similar repositories for onnx-mlir-serving
Users interested in onnx-mlir-serving are comparing it to the libraries listed below.
- ☆69 · Updated 2 years ago
- ☆50 · Updated last year
- Standalone Flash Attention v2 kernel without libtorch dependency ☆108 · Updated 8 months ago
- An easy way to run, test, benchmark and tune OpenCL kernel files ☆23 · Updated last year
- An experimental CPU backend for Triton (https://github.com/openai/triton) ☆40 · Updated last month
- A lightweight, Pythonic frontend for MLIR ☆81 · Updated last year
- Play with MLIR right in your browser ☆135 · Updated last year
- Unified compiler/runtime for interfacing with PyTorch Dynamo ☆100 · Updated 2 months ago
- TORCH_LOGS parser for PT2 ☆37 · Updated 3 weeks ago
- ☆23 · Updated last year
- Benchmark code for the "Online normalizer calculation for softmax" paper ☆91 · Updated 6 years ago
- Benchmark scripts for TVM ☆74 · Updated 3 years ago
- A demo of how to write a high-performance convolution that runs on Apple silicon ☆54 · Updated 3 years ago
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing ☆84 · Updated this week
- Conversions to MLIR EmitC ☆128 · Updated 5 months ago
- How to design CPU GEMM on x86 with AVX256 that can beat OpenBLAS ☆70 · Updated 6 years ago
- Open deep learning compiler stack for CPU, GPU and specialized accelerators ☆34 · Updated 2 years ago
- Ahead of Time (AOT) Triton Math Library ☆63 · Updated this week
- MatMul Performance Benchmarks for a Single CPU Core comparing both hand-engineered and codegen kernels ☆130 · Updated last year
- ☆9 · Updated 2 years ago
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel … ☆181 · Updated 3 months ago
- System for automated integration of deep learning backends ☆48 · Updated 2 years ago
- Play GEMM with TVM ☆91 · Updated last year
- ☆79 · Updated 6 months ago
- Python interface for MLIR - the Multi-Level Intermediate Representation ☆255 · Updated 5 months ago
- PET: Optimizing Tensor Programs with Partially Equivalent Transformations and Automated Corrections ☆121 · Updated 2 years ago
- An unofficial CUDA assembler, for all generations of SASS, hopefully :) ☆83 · Updated 2 years ago
- A sandbox for quick iteration and experimentation on projects related to IREE, MLIR, and LLVM ☆57 · Updated last month
- ☆13 · Updated 5 years ago
- ☆72 · Updated 4 months ago