justinchuby / model-explorer-onnx
Visualize ONNX models with model-explorer
☆66 · Updated 2 weeks ago
Alternatives and similar repositories for model-explorer-onnx
Users interested in model-explorer-onnx are comparing it to the libraries listed below.
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python. ☆414 · Updated this week
- Model compression for ONNX ☆99 · Updated last year
- Use safetensors with ONNX 🤗 ☆78 · Updated 2 months ago
- Efficient in-memory representation for ONNX, in Python ☆37 · Updated this week
- Common utilities for ONNX converters ☆289 · Updated last week
- AI Edge Quantizer: flexible post-training quantization for LiteRT models. ☆84 · Updated this week
- A stand-alone implementation of several NumPy dtype extensions used in machine learning. ☆320 · Updated this week
- QONNX: Arbitrary-Precision Quantized Neural Networks in ONNX ☆168 · Updated this week
- Fast low-bit matmul kernels in Triton ☆413 · Updated last week
- Inference Vision Transformer (ViT) in plain C/C++ with ggml ☆302 · Updated last year
- ☆21 · Updated 9 months ago
- onnxruntime-extensions: A specialized pre- and post-processing library for ONNX Runtime ☆431 · Updated last week
- 🤗 Optimum ExecuTorch ☆93 · Updated this week
- This repository contains the experimental PyTorch native float8 training UX ☆227 · Updated last year
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆225 · Updated this week
- The Triton backend for the ONNX Runtime ☆170 · Updated 2 weeks ago
- The Triton backend for PyTorch TorchScript models ☆168 · Updated last week
- Convert tflite to JSON and make it editable in the IDE; it also converts the edited JSON back to tflite binary. ☆28 · Updated 2 years ago
- An experimental CPU backend for Triton (https://github.com/openai/triton) ☆47 · Updated 4 months ago
- An open-source efficient deep learning framework/compiler, written in Python ☆737 · Updated 3 months ago
- TORCH_LOGS parser for PT2 ☆70 · Updated last month
- Python bindings for ggml ☆146 · Updated last year
- OpenAI Triton backend for Intel® GPUs ☆222 · Updated this week
- High-performance SGEMM on CUDA devices ☆114 · Updated 11 months ago
- ☆340 · Updated 3 weeks ago
- TritonParse: A Compiler Tracer, Visualizer, and Reproducer for Triton Kernels ☆179 · Updated this week
- A user-friendly toolchain that enables the seamless execution of ONNX models using JAX as the backend ☆125 · Updated last week
- ☆171 · Updated 2 weeks ago
- Inference Llama 2 with a model compiled to native code by TorchInductor ☆14 · Updated last year
- A toolkit to help optimize large ONNX models ☆162 · Updated 2 months ago