onnx / digestai
Digest AI is a powerful model analysis tool that extracts insights from your models.
☆31 · Updated 3 months ago
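As a rough illustration of the kind of static analysis a tool like digestai performs, the sketch below loads an ONNX model and reports an operator histogram and an approximate parameter count using the standard `onnx` Python package. This is a minimal sketch, not Digest AI's actual implementation; the file path `model.onnx` is a placeholder.

```python
import math
from collections import Counter

import onnx

# Placeholder path; any serialized ONNX model works here.
model = onnx.load("model.onnx")

# Operator histogram: the kind of structural insight a
# model-analysis tool surfaces.
op_counts = Counter(node.op_type for node in model.graph.node)
for op_type, count in op_counts.most_common():
    print(f"{op_type}: {count}")

# Rough parameter count, summed over the graph's initializers.
total_params = sum(math.prod(init.dims) for init in model.graph.initializer)
print(f"~{total_params} parameters")
```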
Alternatives and similar repositories for digestai
Users interested in digestai are comparing it to the repositories listed below.
- Home for OctoML PyTorch Profiler ☆114 · Updated 2 years ago
- Ahead of Time (AOT) Triton Math Library ☆76 · Updated this week
- LeetGPU Challenges ☆70 · Updated this week
- Machine Learning Agility (MLAgility) benchmark and benchmarking tools ☆39 · Updated last month
- TORCH_LOGS parser for PT2 ☆60 · Updated last week
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) devices. Note… ☆62 · Updated 2 months ago
- No-code CLI designed for accelerating ONNX workflows ☆214 · Updated 3 months ago
- AI Tensor Engine for ROCm ☆276 · Updated this week
- ☆56 · Updated this week
- An experimental CPU backend for Triton (https://github.com/openai/triton) ☆45 · Updated last month
- MLIR-based partitioning system ☆132 · Updated this week
- MLPerf™ logging library ☆37 · Updated last week
- OpenAI Triton backend for Intel® GPUs ☆207 · Updated this week
- Efficient in-memory representation for ONNX, in Python ☆26 · Updated this week
- ☆42 · Updated this week
- TritonParse: A Compiler Tracer, Visualizer, and mini-Reproducer Generator (WIP) for Triton Kernels ☆150 · Updated last week
- Dev repo for power measurement for the MLPerf™ benchmarks ☆24 · Updated last week
- Development repository for the Triton language and compiler (a minimal kernel sketch follows this list) ☆131 · Updated this week
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆97 · Updated 2 months ago
- Benchmarks to capture important workloads. ☆31 · Updated 7 months ago
- oneCCL Bindings for PyTorch* ☆102 · Updated last month
- Extensible collectives library in Triton ☆87 · Updated 5 months ago
- High-speed GEMV kernels, with up to a 2.7x speedup over the PyTorch baseline. ☆114 · Updated last year
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆57 · Updated this week
- An IR for efficiently simulating distributed ML computation. ☆29 · Updated last year
- ☆69 · Updated 2 years ago
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆239 · Updated this week
- A fork of tvm/unity ☆14 · Updated 2 years ago
- ☆44 · Updated this week
- Issues related to MLPerf™ Inference policies, including rules and suggested changes ☆64 · Updated last week
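Several entries above center on the Triton language (the AOT math library, the experimental CPU backend, the Intel GPU backend, and the development repository). As referenced at the Triton entry, below is a minimal sketch of what a Triton kernel looks like: an element-wise vector addition following the pattern from Triton's introductory tutorial. It assumes the `triton` and `torch` packages and a CUDA-capable GPU; none of it is specific to the repositories listed here.

```python
import torch
import triton
import triton.language as tl


@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one contiguous block of elements.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard against out-of-bounds lanes
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)


def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = x.numel()
    # One program per BLOCK_SIZE-sized chunk of the input.
    grid = (triton.cdiv(n, 1024),)
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out


x = torch.randn(4096, device="cuda")
y = torch.randn(4096, device="cuda")
assert torch.allclose(add(x, y), x + y)
```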