IBM / onnx-mlir-serving
ONNX Serving is a project written in C++ to serve onnx-mlir compiled models over gRPC and other protocols. Benefiting from its C++ implementation, ONNX Serving has very low latency overhead and high throughput. ONNX Serving provides dynamic batch aggregation and a worker pool to fully utilize the AI accelerators on the machine.
☆24 · Updated 3 months ago
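In practice, a client talks to such a server over gRPC. The sketch below shows what a minimal C++ client could look like; the service, message, and field names (`InferenceService`, `InferenceRequest`, `model_name`, `data`) are illustrative assumptions for this sketch, not the project's actual proto, which ships with the repository.

```cpp
// Minimal sketch of a gRPC inference client. All proto names below are
// assumptions for illustration; consult the repository's .proto files
// for the real service definition.
#include <iostream>
#include <vector>

#include <grpcpp/grpcpp.h>
#include "inference.grpc.pb.h"  // generated from the (assumed) inference.proto

int main() {
  // Connect to the serving endpoint (address is an example, not a default).
  auto channel = grpc::CreateChannel("localhost:50051",
                                     grpc::InsecureChannelCredentials());
  auto stub = inference::InferenceService::NewStub(channel);

  // Build a request carrying one flattened float input tensor.
  inference::InferenceRequest request;
  request.set_model_name("mnist");
  std::vector<float> input(28 * 28, 0.0f);
  for (float v : input) request.add_data(v);

  // Issue the RPC; the server batches concurrent requests internally.
  inference::InferenceResponse response;
  grpc::ClientContext context;
  grpc::Status status = stub->Inference(&context, request, &response);
  if (!status.ok()) {
    std::cerr << "RPC failed: " << status.error_message() << std::endl;
    return 1;
  }
  std::cout << "Received " << response.data_size() << " output floats\n";
  return 0;
}
```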
Alternatives and similar repositories for onnx-mlir-serving
Users interested in onnx-mlir-serving are comparing it to the libraries listed below.
- A lightweight, Pythonic frontend for MLIR ☆80 · Updated last year
- ☆50 · Updated last year
- Play with MLIR right in your browser ☆135 · Updated 2 years ago
- Experiments and prototypes associated with IREE or MLIR ☆54 · Updated last year
- ☆24 · Updated last year
- ☆69 · Updated 2 years ago
- Open deep learning compiler stack for CPU, GPU, and specialized accelerators ☆34 · Updated 2 years ago
- A demo of how to write a high-performance convolution that runs on Apple silicon ☆54 · Updated 3 years ago
- Unified compiler/runtime for interfacing with PyTorch Dynamo. ☆101 · Updated this week
- An experimental CPU backend for Triton (https://github.com/openai/triton) ☆44 · Updated 5 months ago
- ☆312 · Updated last month
- A sandbox for quick iteration and experimentation on projects related to IREE, MLIR, and LLVM ☆60 · Updated 5 months ago
- TORCH_LOGS parser for PT2 ☆55 · Updated this week
- MLIR-based partitioning system ☆120 · Updated last week
- ☆13 · Updated 5 years ago
- Conversions to MLIR EmitC ☆132 · Updated 8 months ago
- MatMul Performance Benchmarks for a Single CPU Core comparing both hand engineered and codegen kernels. ☆134 · Updated last year
- Python interface for MLIR - the Multi-Level Intermediate Representation ☆264 · Updated 8 months ago
- A Python script to convert the output of NVIDIA Nsight Systems (in SQLite format) to JSON in Google Chrome Trace Event Format. ☆38 · Updated 2 weeks ago
- Open source cross-platform compiler for compute-intensive loops used in AI algorithms, from Microsoft Research ☆109 · Updated last year
- Standalone Flash Attention v2 kernel without libtorch dependency ☆111 · Updated 11 months ago
- An IR for efficiently simulating distributed ML computation. ☆29 · Updated last year
- ☆163 · Updated last week
- Benchmark scripts for TVM ☆75 · Updated 3 years ago
- Notes and artifacts from the ONNX steering committee ☆26 · Updated this week
- torch::deploy (multipy for non-torch uses) is a system that lets you get around the GIL problem by running multiple Python interpreters i… ☆180 · Updated last month
- Common utilities for ONNX converters ☆276 · Updated last month
- Benchmark code for the "Online normalizer calculation for softmax" paper ☆98 · Updated 7 years ago
- SynapseAI Core is a reference implementation of the SynapseAI API running on Habana Gaudi ☆42 · Updated 6 months ago
- Header-only safetensors loader and saver in C++ ☆66 · Updated 3 months ago