onnx / digestai
Digest AI is a powerful model analysis tool that extracts insights from your models.
☆37 · Updated 6 months ago
Alternatives and similar repositories for digestai
Users interested in digestai are comparing it to the libraries listed below.
- Efficient in-memory representation for ONNX, in Python ☆37 · Updated last week
- Home for OctoML PyTorch Profiler ☆114 · Updated 2 years ago
- TORCH_LOGS parser for PT2 ☆70 · Updated this week
- MLIR-based partitioning system ☆153 · Updated last week
- No-code CLI designed for accelerating ONNX workflows ☆222 · Updated 6 months ago
- Ahead of Time (AOT) Triton Math Library ☆84 · Updated 2 weeks ago
- TritonParse: A Compiler Tracer, Visualizer, and Reproducer for Triton Kernels ☆179 · Updated this week
- AI Tensor Engine for ROCm ☆327 · Updated this week
- MLPerf™ logging library ☆37 · Updated last week
- Notes and artifacts from the ONNX steering committee ☆27 · Updated last week
- ☆68 · Updated this week
- ☆41 · Updated last year
- Unified compiler/runtime for interfacing with PyTorch Dynamo. ☆104 · Updated last week
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆104 · Updated 6 months ago
- AMD RAD's multi-GPU Triton-based framework for seamless multi-GPU programming ☆143 · Updated this week
- OpenAI Triton backend for Intel® GPUs ☆223 · Updated this week
- A fork of tvm/unity ☆14 · Updated 2 years ago
- Perplexity open source garden for inference technology ☆313 · Updated this week
- Tilus is a tile-level kernel programming language with explicit control over shared memory and registers. ☆435 · Updated 2 weeks ago
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) devices. Note… ☆63 · Updated 6 months ago
- Open Source Continuous Inference Benchmarking - GB200 NVL72 vs MI355X vs B200 vs H200 vs MI325X & soon™ TPUv6e/v7/Trainium2/3/GB300 NVL72… ☆405 · Updated this week
- Fast low-bit matmul kernels in Triton ☆413 · Updated last week
- Dev repo for power measurement for the MLPerf™ benchmarks ☆26 · Updated 3 months ago
- An experimental CPU backend for Triton (https://github.com/openai/triton) ☆47 · Updated 4 months ago
- ☆68 · Updated 2 years ago
- Helpful kernel tutorials and examples for tile-based GPU programming ☆501 · Updated this week
- A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM ☆174 · Updated last week
- High-speed GEMV kernels, with up to 2.7× speedup over the PyTorch baseline. ☆123 · Updated last year
- Model compression for ONNX ☆99 · Updated last year
- Ship correct and fast LLM kernels to PyTorch ☆127 · Updated last week