onnx / digestai
Digest AI is a powerful model analysis tool that extracts insights from your models.
☆18 · Updated 3 weeks ago
Alternatives and similar repositories for digestai:
Users interested in digestai are comparing it to the libraries listed below.
- An IR for efficiently simulating distributed ML computation. ☆28 · Updated last year
- An experimental CPU backend for Triton (https://github.com/openai/triton). ☆40 · Updated 2 weeks ago
- Unified compiler/runtime for interfacing with PyTorch Dynamo. ☆99 · Updated last month
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆74 · Updated this week
- A fork of tvm/unity ☆14 · Updated last year
- TORCH_LOGS parser for PT2 ☆36 · Updated this week
- Visualize ONNX models with model-explorer ☆31 · Updated 3 weeks ago
- Home for OctoML PyTorch Profiler ☆108 · Updated last year
- ☆49 · Updated last year
- ☆26 · Updated last week
- The missing pieces (as far as boilerplate reduction goes) of the upstream MLIR python bindings. ☆84 · Updated this week
- Explore training for quantized models ☆17 · Updated 2 months ago
- Benchmarks to capture important workloads. ☆30 · Updated 2 months ago
- MLIR-based partitioning system ☆74 · Updated this week
- Framework to reduce autotune overhead to zero for well-known deployments. ☆63 · Updated 2 weeks ago
- Model compression for ONNX ☆87 · Updated 4 months ago
- A lightweight, Pythonic frontend for MLIR ☆81 · Updated last year
- Standalone Flash Attention v2 kernel without libtorch dependency ☆108 · Updated 6 months ago
- A tracing JIT for PyTorch ☆17 · Updated 2 years ago
- ☆69 · Updated 2 years ago
- Conversions to MLIR EmitC ☆128 · Updated 3 months ago
- ☆37 · Updated this week
- ☆23 · Updated last year
- Repository for ONNX SIG artifacts ☆22 · Updated 3 weeks ago
- Llama INT4 CUDA inference with AWQ ☆53 · Updated 2 months ago
- High-Performance SGEMM on CUDA devices ☆87 · Updated 2 months ago
- ☆25 · Updated this week
- ☆23 · Updated last month
- ☆73 · Updated 4 months ago
- OpenAI Triton backend for Intel® GPUs ☆172 · Updated this week