IBM / onnx-mlir-serving
ONNX Serving is a project written in C++ to serve onnx-mlir compiled models over gRPC and other protocols. Thanks to its C++ implementation, ONNX Serving has very low latency overhead and high throughput. ONNX Serving provides dynamic batch aggregation and a worker pool to fully utilize the AI accelerators on the machine.
☆25 · Updated 4 months ago
Alternatives and similar repositories for onnx-mlir-serving
Users interested in onnx-mlir-serving are comparing it to the libraries listed below.
- An experimental CPU backend for Triton (https://github.com/openai/triton) ☆49 · Updated 5 months ago
- A lightweight, Pythonic frontend for MLIR ☆80 · Updated 2 years ago
- Unified compiler/runtime for interfacing with PyTorch Dynamo ☆104 · Updated last month
- Open-source cross-platform compiler for compute-intensive loops used in AI algorithms, from Microsoft Research ☆116 · Updated 2 years ago
- Notes and artifacts from the ONNX steering committee ☆28 · Updated last week
- Open deep learning compiler stack for CPU, GPU and specialized accelerators ☆35 · Updated 3 years ago
- Play with MLIR right in your browser ☆138 · Updated 2 years ago
- Experiments and prototypes associated with IREE or MLIR ☆56 · Updated last year
- TORCH_TRACE parser for PT2 ☆76 · Updated last week
- ☆172 · Updated this week
- C++ implementations of various tokenizers (SentencePiece, tiktoken, etc.) ☆48 · Updated last week
- A demo of how to write a high-performance convolution that runs on Apple silicon ☆57 · Updated 4 years ago
- The missing pieces (as far as boilerplate reduction goes) of the upstream MLIR Python bindings ☆117 · Updated 3 months ago
- Standalone Flash Attention v2 kernel without a libtorch dependency ☆114 · Updated last year
- ☆24 · Updated last year
- MLIR-based partitioning system ☆164 · Updated this week
- ☆49 · Updated last year
- ☆68 · Updated 2 years ago
- ☆322 · Updated this week
- ☆71 · Updated 10 months ago
- TritonParse: A Compiler Tracer, Visualizer, and Reproducer for Triton Kernels ☆194 · Updated last week
- IREE's PyTorch frontend, based on Torch Dynamo ☆105 · Updated this week
- Common utilities for ONNX converters ☆294 · Updated last month
- Ahead-of-Time (AOT) Triton math library ☆88 · Updated this week
- Python interface for MLIR, the Multi-Level Intermediate Representation ☆272 · Updated last year
- MLIRX is now defunct; please see PolyBlocks: https://docs.polymagelabs.com ☆38 · Updated 2 years ago
- ☆137 · Updated this week
- Intel® Extension for MLIR: a staging ground for MLIR dialects and tools for Intel devices using the MLIR toolchain ☆147 · Updated last week
- MatMul performance benchmarks for a single CPU core, comparing hand-engineered and codegen kernels ☆138 · Updated 2 years ago
- Conversions to MLIR EmitC ☆134 · Updated last year