IBM / onnx-mlir-serving
ONNX Serving is a project written in C++ to serve onnx-mlir compiled models over gRPC and other protocols. Thanks to its C++ implementation, ONNX Serving has very low latency overhead and high throughput. ONNX Serving provides dynamic batch aggregation and a worker pool to fully utilize the AI accelerators on the machine (see the sketch below).
☆25 Updated last month
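The dynamic-batching idea in that description is roughly: collect incoming requests into a queue, close a batch when it reaches a size cap or a short time window expires, and hand whole batches to a pool of workers. Below is a minimal, self-contained C++ sketch of that generic pattern; it is not onnx-mlir-serving's actual code, and every name in it (`Request`, `BatchQueue`, `pop_batch`) is hypothetical.

```cpp
// Illustrative sketch of dynamic batch aggregation feeding a worker pool.
// NOT onnx-mlir-serving's implementation; all names here are hypothetical.
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct Request {
    int id;
    float payload;  // stand-in for a real input tensor
};

class BatchQueue {
public:
    BatchQueue(size_t max_batch, std::chrono::milliseconds window)
        : max_batch_(max_batch), window_(window) {}

    void push(Request r) {
        {
            std::lock_guard<std::mutex> lk(mu_);
            q_.push(std::move(r));
        }
        cv_.notify_one();
    }

    // Block until at least one request arrives, then gather up to
    // max_batch_ requests, waiting at most `window_` for stragglers.
    std::vector<Request> pop_batch() {
        std::unique_lock<std::mutex> lk(mu_);
        cv_.wait(lk, [&] { return !q_.empty(); });
        auto deadline = std::chrono::steady_clock::now() + window_;
        std::vector<Request> batch;
        while (batch.size() < max_batch_) {
            if (q_.empty()) {
                if (cv_.wait_until(lk, deadline) == std::cv_status::timeout)
                    break;  // batching window closed; ship what we have
                continue;
            }
            batch.push_back(std::move(q_.front()));
            q_.pop();
        }
        return batch;
    }

private:
    size_t max_batch_;
    std::chrono::milliseconds window_;
    std::mutex mu_;
    std::condition_variable cv_;
    std::queue<Request> q_;
};

int main() {
    BatchQueue queue(/*max_batch=*/4, std::chrono::milliseconds(10));

    // Enqueue 16 requests up front so the demo terminates deterministically:
    // 2 workers x 2 iterations each drain exactly four batches of four.
    for (int i = 0; i < 16; ++i) queue.push({i, static_cast<float>(i)});

    // Worker pool: each worker pops a batch; a real server would run one
    // model invocation per batch on its assigned accelerator.
    std::vector<std::thread> workers;
    for (int w = 0; w < 2; ++w) {
        workers.emplace_back([&queue, w] {
            for (int i = 0; i < 2; ++i) {
                std::vector<Request> batch = queue.pop_batch();
                std::cout << "worker " << w << ": batch of "
                          << batch.size() << " requests\n";
            }
        });
    }
    for (auto& t : workers) t.join();
}
```

In a real server the `payload` would be the request's input tensors, and each batch would become a single call into the onnx-mlir compiled model entry point, which is what amortizes per-invocation overhead across requests.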
Alternatives and similar repositories for onnx-mlir-serving
Users interested in onnx-mlir-serving are comparing it to the libraries listed below:
- Unified compiler/runtime for interfacing with PyTorch Dynamo ☆102 Updated 2 months ago
- An experimental CPU backend for Triton (https://github.com/openai/triton) ☆47 Updated 2 months ago
- A lightweight, Pythonic frontend for MLIR ☆80 Updated 2 years ago
- Play with MLIR right in your browser ☆137 Updated 2 years ago
- Standalone Flash Attention v2 kernel without libtorch dependency ☆112 Updated last year
- ☆167 Updated this week
- TORCH_LOGS parser for PT2 ☆62 Updated last month
- MLIR-based partitioning system ☆143 Updated this week
- Efficient in-memory representation for ONNX, in Python ☆30 Updated this week
- ☆50 Updated last year
- A demo of how to write a high-performance convolution that runs on Apple silicon ☆56 Updated 3 years ago
- C++ implementations of various tokenizers (sentencepiece, tiktoken, etc.) ☆39 Updated this week
- Notes and artifacts from the ONNX steering committee ☆26 Updated last week
- ☆68 Updated 2 years ago
- ☆24 Updated last year
- Open-source cross-platform compiler for compute-intensive loops used in AI algorithms, from Microsoft Research ☆112 Updated 2 years ago
- MatMul performance benchmarks for a single CPU core, comparing both hand-engineered and codegen kernels ☆134 Updated 2 years ago
- Header-only safetensors loader and saver in C++ ☆69 Updated 5 months ago
- Ahead-of-Time (AOT) Triton math library ☆80 Updated 2 weeks ago
- Common utilities for ONNX converters ☆283 Updated last month
- Python interface for MLIR - the Multi-Level Intermediate Representation ☆268 Updated 11 months ago
- Experiments and prototypes associated with IREE or MLIR ☆55 Updated last year
- An experimental CPU backend for Triton ☆154 Updated last week
- Benchmark code for the "Online normalizer calculation for softmax" paper ☆102 Updated 7 years ago
- A Python script to convert the output of NVIDIA Nsight Systems (in SQLite format) to JSON in Google Chrome Trace Event Format ☆41 Updated 2 months ago
- llama INT4 CUDA inference with AWQ ☆55 Updated 9 months ago
- ☆314 Updated 3 months ago
- Nsight Compute in Docker ☆12 Updated last year
- Inference of Vision Transformer (ViT) in plain C/C++ with ggml ☆295 Updated last year
- ☆422 Updated this week