IBM / onnx-mlir-serving
ONNX Serving is a project written in C++ to serve onnx-mlir compiled models over gRPC and other protocols. Benefiting from its C++ implementation, ONNX Serving has very low latency overhead and high throughput. ONNX Serving provides dynamic batch aggregation and a worker pool to fully utilize the AI accelerators on the machine.
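The dynamic batch aggregation described above can be sketched roughly as follows. This is an illustrative Python sketch of the general technique, not the project's actual C++ API: requests are queued individually, and a worker greedily drains the queue into batches (up to a size limit, waiting briefly for stragglers) before invoking the model once per batch. All names here (`BatchAggregator`, `run_batch`, etc.) are hypothetical.

```python
import queue
import threading

class BatchAggregator:
    """Collects individual inference requests and dispatches them in batches."""

    def __init__(self, run_batch, max_batch=8, timeout=0.01):
        self._run_batch = run_batch  # callable: list of inputs -> list of outputs
        self._max_batch = max_batch
        self._timeout = timeout      # max wait (seconds) for a fuller batch
        self._queue = queue.Queue()
        self._worker = threading.Thread(target=self._loop, daemon=True)
        self._worker.start()

    def infer(self, x):
        """Submit one input; block until its result is ready."""
        done = threading.Event()
        slot = {}
        self._queue.put((x, slot, done))
        done.wait()
        return slot["result"]

    def _loop(self):
        while True:
            # Block for the first request, then greedily drain up to max_batch,
            # waiting at most `timeout` for each additional request.
            items = [self._queue.get()]
            while len(items) < self._max_batch:
                try:
                    items.append(self._queue.get(timeout=self._timeout))
                except queue.Empty:
                    break
            # One model invocation serves the whole batch.
            outputs = self._run_batch([x for x, _, _ in items])
            for (_, slot, done), y in zip(items, outputs):
                slot["result"] = y
                done.set()
```

For example, with a toy "model" that doubles its inputs, `BatchAggregator(lambda xs: [x * 2 for x in xs]).infer(3)` returns `6`. The trade-off tuned by `max_batch` and `timeout` is the usual one for serving systems: larger batches improve accelerator utilization and throughput at the cost of a small added latency per request.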
☆24 · Updated last month
Alternatives and similar repositories for onnx-mlir-serving
Users interested in onnx-mlir-serving are comparing it to the libraries listed below.
- ☆69 · Updated 2 years ago
- A lightweight, Pythonic frontend for MLIR ☆81 · Updated last year
- MLIR-based partitioning system ☆97 · Updated this week
- ☆50 · Updated last year
- Unified compiler/runtime for interfacing with PyTorch Dynamo ☆100 · Updated last month
- Yet another polyhedral compiler for deep learning ☆19 · Updated 2 years ago
- An experimental CPU backend for Triton (https://github.com/openai/triton) ☆43 · Updated 3 months ago
- ☆24 · Updated last year
- An IR for efficiently simulating distributed ML computation ☆28 · Updated last year
- Benchmark code for the "Online normalizer calculation for softmax" paper ☆94 · Updated 6 years ago
- Benchmarks to capture important workloads ☆31 · Updated 4 months ago
- Standalone Flash Attention v2 kernel without libtorch dependency ☆110 · Updated 9 months ago
- Play with MLIR right in your browser ☆135 · Updated 2 years ago
- Fairring (FAIR + Herring) is a plug-in for PyTorch that provides a process group for distributed training that outperforms NCCL at large … ☆65 · Updated 3 years ago
- TORCH_LOGS parser for PT2 ☆43 · Updated 3 weeks ago
- High-speed GEMV kernels, with up to 2.7x speedup over the PyTorch baseline ☆109 · Updated 11 months ago
- Benchmark scripts for TVM ☆74 · Updated 3 years ago
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing ☆90 · Updated 2 weeks ago
- MatMul performance benchmarks for a single CPU core, comparing hand-engineered and codegen kernels ☆133 · Updated last year
- Conversions to MLIR EmitC ☆129 · Updated 6 months ago
- ☆72 · Updated 3 months ago
- Open deep learning compiler stack for CPUs, GPUs, and specialized accelerators ☆34 · Updated 2 years ago
- A tracing JIT for PyTorch ☆17 · Updated 2 years ago
- A sandbox for quick iteration and experimentation on projects related to IREE, MLIR, and LLVM ☆58 · Updated 3 months ago
- Explore training for quantized models ☆18 · Updated this week
- A tracing JIT compiler for PyTorch ☆13 · Updated 3 years ago
- System for automated integration of deep learning backends ☆47 · Updated 2 years ago
- MLIRX is now defunct. Please see PolyBlocks - https://docs.polymagelabs.com ☆38 · Updated last year
- An easy way to run, test, benchmark, and tune OpenCL kernel files ☆23 · Updated last year
- torch::deploy (multipy for non-torch uses) is a system that lets you get around the GIL problem by running multiple Python interpreters i… ☆180 · Updated 2 weeks ago