IBM / onnx-mlir-serving
ONNX Serving is a project written in C++ to serve onnx-mlir compiled models over gRPC and other protocols. Benefiting from its C++ implementation, ONNX Serving has very low latency overhead and high throughput. ONNX Serving provides dynamic batch aggregation and worker pools to fully utilize the AI accelerators on the machine.
☆ 23 · Updated last year
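The dynamic batch aggregation mentioned above can be illustrated with a short standard-C++ sketch: incoming requests queue up, and a worker thread drains them in batches once a batch fills or a timeout expires, so one model invocation serves many requests. All names here (`BatchQueue`, `Request`, `kMaxBatch`, `kBatchTimeout`, `RunBatch`) are illustrative assumptions, not the project's actual API; `RunBatch` stands in for the onnx-mlir compiled model's entry point.

```cpp
// Minimal sketch of dynamic batch aggregation with a worker, standard C++ only.
// Names are hypothetical, not taken from onnx-mlir-serving.
#include <algorithm>
#include <chrono>
#include <condition_variable>
#include <deque>
#include <future>
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

struct Request {
    std::vector<float> input;              // flattened input tensor
    std::promise<std::vector<float>> out;  // fulfilled when the batch finishes
};

class BatchQueue {
public:
    // Called by the RPC handler for each incoming request.
    std::future<std::vector<float>> Submit(std::vector<float> input) {
        Request r{std::move(input), {}};
        auto fut = r.out.get_future();
        {
            std::lock_guard<std::mutex> lk(mu_);
            pending_.push_back(std::move(r));
        }
        cv_.notify_one();
        return fut;
    }

    // Worker loop: wait until a full batch forms or the timeout expires,
    // then run the whole batch through the model in one call.
    void WorkerLoop() {
        for (;;) {
            std::vector<Request> batch;
            {
                std::unique_lock<std::mutex> lk(mu_);
                cv_.wait_for(lk, kBatchTimeout,
                             [&] { return pending_.size() >= kMaxBatch; });
                if (pending_.empty()) continue;
                size_t n = std::min(pending_.size(), kMaxBatch);
                for (size_t i = 0; i < n; ++i) {
                    batch.push_back(std::move(pending_.front()));
                    pending_.pop_front();
                }
            }
            RunBatch(batch);
        }
    }

private:
    static constexpr size_t kMaxBatch = 8;
    static constexpr std::chrono::milliseconds kBatchTimeout{5};

    // Placeholder for the compiled model's batched entry point:
    // here it just echoes each input back as the output.
    static void RunBatch(std::vector<Request>& batch) {
        for (auto& r : batch) r.out.set_value(r.input);
    }

    std::mutex mu_;
    std::condition_variable cv_;
    std::deque<Request> pending_;
};

int main() {
    BatchQueue q;
    std::thread worker(&BatchQueue::WorkerLoop, &q);  // one worker of the pool
    worker.detach();
    auto f = q.Submit({1.0f, 2.0f, 3.0f});
    std::cout << "first output element: " << f.get()[0] << "\n";
}
```

In the real server several such workers would form the pool, and batching amortizes per-call overhead so the accelerator sees large, efficient invocations instead of many single-request ones.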
Related projects
Alternatives and complementary repositories for onnx-mlir-serving
- Standalone Flash Attention v2 kernel without libtorch dependency (☆ 98, updated 2 months ago)
- TVM stack: exploring the incredible explosion of deep-learning frameworks and how to bring them together (☆ 63, updated 6 years ago)
- How to design CPU GEMM on x86 with AVX256 that can beat OpenBLAS (☆ 65, updated 5 years ago)
- Benchmark scripts for TVM (☆ 73, updated 2 years ago)
- A sandbox for quick iteration and experimentation on projects related to IREE, MLIR, and LLVM (☆ 54, updated 2 months ago)
- MatMul performance benchmarks for a single CPU core, comparing hand-engineered and codegen kernels (☆ 123, updated last year)
- A standalone GEMM kernel for fp16 activations and quantized weights, extracted from FasterTransformer (☆ 85, updated 8 months ago)
- Play GEMM with TVM (☆ 84, updated last year)
- Open deep learning compiler stack for CPU, GPU, and specialized accelerators (☆ 33, updated last year)
- LLaMA INT4 CUDA inference with AWQ (☆ 47, updated 4 months ago)
- TensorFlow and TVM integration (☆ 38, updated 4 years ago)
- Play with MLIR right in your browser (☆ 124, updated last year)
- A lightweight, Pythonic frontend for MLIR (☆ 80, updated last year)
- Unified compiler/runtime for interfacing with PyTorch Dynamo (☆ 95, updated this week)
- Fork of https://source.codeaurora.org/quic/hexagon_nn/nnlib (☆ 54, updated last year)
- A demo of how to write a high-performance convolution that runs on Apple silicon (☆ 52, updated 2 years ago)
- An experimental CPU backend for Triton (https://github.com/openai/triton) (☆ 35, updated 6 months ago)
- Benchmark code for the "Online normalizer calculation for softmax" paper; a sketch of the algorithm follows this list (☆ 59, updated 6 years ago)
- Tencent Distribution of TVM (☆ 15, updated last year)
- Yet another polyhedral compiler for deep learning (☆ 19, updated last year)
- An extension library of WMMA API (Tensor Core API) (☆ 83, updated 4 months ago)
- High-speed GEMV kernels, achieving up to 2.7x speedup over the PyTorch baseline (☆ 87, updated 4 months ago)
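Since the list points at the "Online normalizer calculation for softmax" benchmark, here is a minimal C++ sketch of that paper's single-pass trick: keep a running maximum `m` and a running sum `d` of `exp(x_i - m)`, rescaling `d` whenever the maximum grows, so the normalizer needs one pass over the data instead of two. The function name `OnlineSoftmax` is illustrative, not taken from the benchmark repo.

```cpp
// Single-pass ("online") softmax normalizer, per Milakov & Gibiansky.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

std::vector<float> OnlineSoftmax(const std::vector<float>& x) {
    float m = -INFINITY;  // running maximum of the elements seen so far
    float d = 0.0f;       // running sum of exp(x_i - m)
    for (float v : x) {
        float m_new = std::max(m, v);
        // Rescale the old sum to the new maximum, then add the new term.
        d = d * std::exp(m - m_new) + std::exp(v - m_new);
        m = m_new;
    }
    std::vector<float> y(x.size());
    for (size_t i = 0; i < x.size(); ++i)
        y[i] = std::exp(x[i] - m) / d;  // standard max-shifted softmax
    return y;
}

int main() {
    auto y = OnlineSoftmax({1.0f, 2.0f, 3.0f});
    for (float v : y) std::printf("%f\n", v);  // ~0.0900, 0.2447, 0.6652
}
```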