IBM / onnx-mlir-serving
ONNX Serving is a project written in C++ to serve onnx-mlir compiled models with gRPC and other protocols. Benefiting from its C++ implementation, ONNX Serving has very low latency overhead and high throughput. ONNX Serving provides dynamic batch aggregation and a worker pool to fully utilize the AI accelerators on the machine.
☆22 · Updated last year
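The dynamic batch aggregation described above can be sketched as a worker that drains a request queue up to a maximum batch size, waiting a short timeout for more requests before dispatching. This is an illustrative sketch only: the function and parameter names (`batch_worker`, `max_batch`, `timeout_s`) are hypothetical and do not reflect onnx-mlir-serving's actual C++ API.

```python
import queue
import time

def batch_worker(requests, run_batch, max_batch=8, timeout_s=0.005):
    """Aggregate queued requests into one batch for the accelerator.

    Blocks for the first request, then collects more until either
    max_batch is reached or timeout_s elapses, and hands the batch
    to run_batch (e.g. one batched inference call).
    """
    batch = [requests.get()]            # block until at least one request
    deadline = time.monotonic() + timeout_s
    while len(batch) < max_batch:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            break                       # timeout: ship a partial batch
        try:
            batch.append(requests.get(timeout=remaining))
        except queue.Empty:
            break                       # no more requests arrived in time
    return run_batch(batch)
```

The timeout bounds the latency added by batching: a lone request waits at most `timeout_s` before being served, while bursts are aggregated into larger batches that keep the accelerator busy.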
Alternatives and similar repositories for onnx-mlir-serving:
Users interested in onnx-mlir-serving are comparing it to the libraries listed below:
- ☆23 · Updated last year
- An experimental CPU backend for Triton (https://github.com/openai/triton) ☆39 · Updated 10 months ago
- ☆12 · Updated 5 years ago
- ☆49 · Updated last year
- A lightweight, Pythonic frontend for MLIR ☆80 · Updated last year
- Standalone Flash Attention v2 kernel without libtorch dependency ☆105 · Updated 6 months ago
- Open deep learning compiler stack for cpu, gpu and specialized accelerators ☆34 · Updated 2 years ago
- Unified compiler/runtime for interfacing with PyTorch Dynamo. ☆99 · Updated 2 weeks ago
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. By pro… ☆67 · Updated this week
- Benchmark code for the "Online normalizer calculation for softmax" paper ☆85 · Updated 6 years ago
- ☆69 · Updated 2 years ago
- A sandbox for quick iteration and experimentation on projects related to IREE, MLIR, and LLVM ☆56 · Updated last month
- Experiments and prototypes associated with IREE or MLIR ☆50 · Updated 7 months ago
- An easy way to run, test, benchmark and tune OpenCL kernel files ☆23 · Updated last year
- Play with MLIR right in your browser ☆131 · Updated last year
- Several optimization methods of half-precision general matrix-vector multiplication (HGEMV) using CUDA cores. ☆57 · Updated 6 months ago
- An MLIR-based toy DL compiler for TVM Relay. ☆57 · Updated 2 years ago
- A demo of how to write a high-performance convolution that runs on Apple silicon ☆54 · Updated 3 years ago
- How to design a CPU GEMM on x86 with AVX256 that can beat OpenBLAS. ☆68 · Updated 5 years ago
- Play GEMM with TVM ☆89 · Updated last year
- MatMul performance benchmarks for a single CPU core, comparing both hand-engineered and codegen kernels. ☆128 · Updated last year
- ☆10 · Updated last year
- Conversions to MLIR EmitC ☆127 · Updated 3 months ago
- An experimental CPU backend for Triton ☆99 · Updated this week
- MLIR-based partitioning system ☆71 · Updated this week
- Benchmark scripts for TVM ☆73 · Updated 2 years ago
- TPP experimentation on MLIR for linear algebra ☆120 · Updated this week
- LLM deploy project based on ONNX. ☆31 · Updated 5 months ago
- An extension library of the WMMA API (Tensor Core API) ☆91 · Updated 8 months ago