justinchuby / onnx-safetensors
Use safetensors with ONNX 🤗
☆63 · Updated 3 months ago
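The safetensors layout that onnx-safetensors builds on is simple enough to sketch with the standard library alone: an 8-byte little-endian header length, a JSON header mapping tensor names to dtype, shape, and byte offsets, then the raw tensor bytes. The helper names below are hypothetical; real tooling should use the `safetensors` library rather than this illustration.

```python
# Minimal sketch of the safetensors on-disk layout (stdlib only).
# Helper names are illustrative, not part of any library's API.
import json
import struct

def save_tensors(path, tensors):
    """tensors: {name: (dtype_str, shape_tuple, raw_bytes)}"""
    header, payload, offset = {}, b"", 0
    for name, (dtype, shape, data) in tensors.items():
        header[name] = {"dtype": dtype, "shape": list(shape),
                        "data_offsets": [offset, offset + len(data)]}
        payload += data
        offset += len(data)
    header_bytes = json.dumps(header).encode("utf-8")
    with open(path, "wb") as f:
        f.write(struct.pack("<Q", len(header_bytes)))  # 8-byte header size
        f.write(header_bytes)                          # JSON header
        f.write(payload)                               # raw tensor buffer

def load_tensors(path):
    with open(path, "rb") as f:
        (n,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(n))
        data = f.read()
    return {name: (meta["dtype"], tuple(meta["shape"]),
                   data[meta["data_offsets"][0]:meta["data_offsets"][1]])
            for name, meta in header.items()}

# Round-trip a tiny fp32 weight: four floats, shape (2, 2).
raw = struct.pack("<4f", 1.0, 2.0, 3.0, 4.0)
save_tensors("demo.safetensors", {"w": ("F32", (2, 2), raw)})
print(load_tensors("demo.safetensors")["w"][1])  # -> (2, 2)
```

Because the header is plain JSON and offsets are explicit, individual tensors can be read lazily without loading the whole file, which is what makes the format attractive for swapping weights in and out of ONNX models.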
Alternatives and similar repositories for onnx-safetensors
Users interested in onnx-safetensors are comparing it to the libraries listed below.
- Model compression for ONNX ☆96 · Updated 7 months ago
- Visualize ONNX models with model-explorer ☆36 · Updated last month
- A toolkit to help optimize ONNX models ☆159 · Updated this week
- Python bindings for ggml ☆141 · Updated 9 months ago
- The Triton backend for the ONNX Runtime ☆153 · Updated last week
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆129 · Updated this week
- Thin wrapper around GGML to make life easier ☆35 · Updated 3 weeks ago
- Common utilities for ONNX converters ☆272 · Updated 6 months ago
- GGUF parser in Python ☆28 · Updated 10 months ago
- A general 2–8 bit quantization toolbox with GPTQ/AWQ/HQQ/VPTQ and easy export to ONNX/ONNX Runtime ☆172 · Updated 2 months ago
- Triton CLI is an open-source command-line interface that enables users to create, deploy, and profile models served by the Triton Inferen… ☆64 · Updated 2 weeks ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆264 · Updated 8 months ago
- Common source, scripts, and utilities shared across all Triton repositories ☆74 · Updated last week
- AI Edge Quantizer: flexible post-training quantization for LiteRT models ☆49 · Updated this week
- 🤗 Optimum ExecuTorch ☆53 · Updated this week
- OpenVINO Tokenizers extension ☆36 · Updated last week
- Inference Vision Transformer (ViT) in plain C/C++ with ggml ☆288 · Updated last year
- Experiments with BitNet inference on CPU ☆54 · Updated last year
- AMD-related optimizations for transformer models ☆79 · Updated 7 months ago
- onnxruntime-extensions: a specialized pre- and post-processing library for ONNX Runtime ☆394 · Updated this week
- An innovative library for efficient LLM inference via low-bit quantization ☆349 · Updated 9 months ago
- High-performance SGEMM on CUDA devices ☆95 · Updated 5 months ago
- No-code CLI designed for accelerating ONNX workflows ☆198 · Updated 2 weeks ago
- Fast low-bit matmul kernels in Triton ☆322 · Updated last week
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python ☆360 · Updated this week
- Course project for COMP4471 on RWKV ☆17 · Updated last year
- The Triton backend for TensorRT ☆77 · Updated last week
- Notes and artifacts from the ONNX Steering Committee ☆26 · Updated 2 weeks ago
- 👷 Build compute kernels ☆68 · Updated this week
- Load compute kernels from the Hub ☆191 · Updated this week
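For the GGUF parser entry above, the fixed part of a GGUF (v3) file header is also easy to sketch with the standard library: the magic bytes `GGUF`, a uint32 format version, then uint64 tensor and metadata key-value counts, all little-endian. The function name below is illustrative, not taken from any listed project.

```python
# Hypothetical minimal GGUF header check (stdlib only).
import struct

def read_gguf_header(blob):
    # magic (4 bytes), version (u32), tensor count (u64), metadata KV count (u64)
    magic, version, n_tensors, n_kv = struct.unpack_from("<4sIQQ", blob, 0)
    if magic != b"GGUF":
        raise ValueError("not a GGUF file")
    return {"version": version, "tensors": n_tensors, "metadata_kv": n_kv}

# Build a fake header in memory instead of reading a real model file.
blob = struct.pack("<4sIQQ", b"GGUF", 3, 2, 5)
print(read_gguf_header(blob))  # {'version': 3, 'tensors': 2, 'metadata_kv': 5}
```

A full parser would go on to decode the metadata key-value pairs and tensor descriptors that follow this fixed header; the sketch stops at the part whose layout is fixed across files.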