justinchuby / model-explorer-onnx
Visualize ONNX models with model-explorer
☆64 · Updated last month
Alternatives and similar repositories for model-explorer-onnx
Users interested in model-explorer-onnx are comparing it to the libraries listed below.
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python. ☆412 · Updated this week
- Common utilities for ONNX converters ☆288 · Updated 3 months ago
- QONNX: Arbitrary-Precision Quantized Neural Networks in ONNX ☆166 · Updated this week
- Model compression for ONNX ☆99 · Updated last year
- PyTorch extension for emulating FP8 data formats on standard FP32 Xeon/GPU hardware. ☆112 · Updated last year
- Use safetensors with ONNX 🤗 ☆76 · Updated 2 months ago
- Fast low-bit matmul kernels in Triton ☆402 · Updated 2 weeks ago
- OpenAI Triton backend for Intel® GPUs ☆221 · Updated last week
- ☆170 · Updated 3 weeks ago
- Inference Vision Transformer (ViT) in plain C/C++ with ggml ☆300 · Updated last year
- Convert tflite to JSON and make it editable in the IDE. It also converts the edited JSON back to tflite binary. ☆28 · Updated 2 years ago
- AI Edge Quantizer: flexible post-training quantization for LiteRT models. ☆81 · Updated 3 weeks ago
- Representation and Reference Lowering of ONNX Models in MLIR Compiler Infrastructure ☆940 · Updated 2 weeks ago
- Open Neural Network Exchange to C compiler. ☆338 · Updated last week
- An experimental CPU backend for Triton (https://github.com/openai/triton) ☆47 · Updated 3 months ago
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆214 · Updated this week
- ONNX Optimizer ☆781 · Updated last month
- onnxruntime-extensions: A specialized pre- and post-processing library for ONNX Runtime ☆430 · Updated this week
- Development repository for the Triton language and compiler ☆137 · Updated this week
- ☆166 · Updated 2 years ago
- Efficient in-memory representation for ONNX, in Python ☆34 · Updated last week
- Unified compiler/runtime for interfacing with PyTorch Dynamo. ☆104 · Updated last week
- ☆68 · Updated 2 years ago
- High-speed GEMV kernels, up to 2.7× speedup over the PyTorch baseline. ☆123 · Updated last year
- CUDA Matrix Multiplication Optimization ☆243 · Updated last year
- PyTorch emulation library for Microscaling (MX)-compatible data formats ☆324 · Updated 5 months ago
- A Toolkit to Help Optimize Large ONNX Models ☆162 · Updated last month
- ☆159 · Updated 2 years ago
- TritonParse: A Compiler Tracer, Visualizer, and Reproducer for Triton Kernels ☆177 · Updated this week
- An open-source efficient deep learning framework/compiler, written in Python. ☆737 · Updated 3 months ago