onnx / digestai
Digest AI is a powerful model analysis tool that extracts insights from your models.
☆21 · Updated 2 months ago
Alternatives and similar repositories for digestai
Users interested in digestai are comparing it to the libraries listed below:
- Ahead of Time (AOT) Triton Math Library · ☆63 · Updated this week
- An experimental CPU backend for Triton (https://github.com/openai/triton) · ☆42 · Updated 2 months ago
- ☆69 · Updated 2 years ago
- ☆33 · Updated this week
- ☆50 · Updated last year
- Visualize ONNX models with model-explorer · ☆33 · Updated 2 months ago
- A fork of tvm/unity · ☆14 · Updated last year
- Benchmarks to capture important workloads · ☆31 · Updated 3 months ago
- Home for OctoML PyTorch Profiler · ☆113 · Updated 2 years ago
- A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate · ☆138 · Updated this week
- High-speed GEMV kernels with up to a 2.7x speedup over the PyTorch baseline · ☆109 · Updated 10 months ago
- ☆27 · Updated 4 months ago
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing · ☆85 · Updated this week
- An IR for efficiently simulating distributed ML computation · ☆28 · Updated last year
- ☆24 · Updated last year
- AI Tensor Engine for ROCm · ☆195 · Updated this week
- MLIR-based partitioning system · ☆82 · Updated this week
- Model compression for ONNX · ☆92 · Updated 6 months ago
- Explore training for quantized models · ☆18 · Updated 4 months ago
- Fast low-bit matmul kernels in Triton · ☆301 · Updated this week
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) devices. Note… · ☆61 · Updated 2 months ago
- Standalone Flash Attention v2 kernel without libtorch dependency · ☆108 · Updated 8 months ago
- Unified compiler/runtime for interfacing with PyTorch Dynamo · ☆100 · Updated this week
- ☆69 · Updated last month
- A library for syntactically rewriting Python programs, pronounced (sinner) · ☆69 · Updated 3 years ago
- JaxPP is a library for JAX that enables flexible MPMD pipeline parallelism for large-scale LLM training · ☆43 · Updated this week
- ☆79 · Updated 6 months ago
- oneCCL Bindings for Pytorch* · ☆97 · Updated 3 weeks ago
- A schedule language for large model training · ☆147 · Updated 11 months ago
- Benchmark scripts for TVM · ☆74 · Updated 3 years ago