justinchuby / onnx-safetensors
Use safetensors with ONNX 🤗
⭐ 69 · Updated last week
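For orientation, here is a minimal sketch of the round trip that onnx-safetensors enables (storing an ONNX model's weights in a .safetensors file, separate from the graph). It is written with the plain `onnx` and `safetensors` packages rather than the project's own API, and the file names are placeholders:

```python
import onnx
from onnx import numpy_helper
from safetensors.numpy import save_file, load_file

# Pull every initializer (weight) out of the graph as a numpy array.
# Assumes all initializers use numpy-compatible dtypes and inline data.
model = onnx.load("model.onnx")  # placeholder path
weights = {
    init.name: numpy_helper.to_array(init)
    for init in model.graph.initializer
}

# Store the weights separately from the graph structure.
save_file(weights, "model.safetensors")

# Later: read the tensors back and re-attach them to the graph.
restored = load_file("model.safetensors")
del model.graph.initializer[:]
model.graph.initializer.extend(
    numpy_helper.from_array(arr, name=name) for name, arr in restored.items()
)
onnx.save(model, "model_restored.onnx")
```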
Alternatives and similar repositories for onnx-safetensors
Users interested in onnx-safetensors are comparing it to the libraries listed below:
- Model compression for ONNX ⭐ 97 · Updated 10 months ago
- No-code CLI designed for accelerating ONNX workflows ⭐ 214 · Updated 3 months ago
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python. ⭐ 400 · Updated last week
- Inference Vision Transformer (ViT) in plain C/C++ with ggml ⭐ 296 · Updated last year
- Python bindings for ggml ⭐ 146 · Updated last year
- Thin wrapper around GGML to make life easier ⭐ 39 · Updated 3 months ago
- onnxruntime-extensions: A specialized pre- and post-processing library for ONNX Runtime ⭐ 414 · Updated last week
- Visualize ONNX models with model-explorer ⭐ 45 · Updated this week
- A toolkit to help optimize ONNX models ⭐ 220 · Updated last week
- AI Edge Quantizer: flexible post-training quantization for LiteRT models ⭐ 69 · Updated this week
- A safetensors extension to efficiently store sparse quantized tensors on disk ⭐ 167 · Updated this week
- AMD-related optimizations for transformer models ⭐ 90 · Updated last month
- PyTorch half precision gemm lib w/ fused optional bias + optional relu/gelu ⭐ 74 · Updated 10 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ⭐ 265 · Updated 11 months ago
- Common utilities for ONNX converters ⭐ 282 · Updated last month
- 🤗 Optimum ONNX: Export your model to ONNX and run inference with ONNX Runtime ⭐ 53 · Updated this week
- An innovative library for efficient LLM inference via low-bit quantization ⭐ 350 · Updated last year
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ⭐ 498 · Updated last week
- 🤗 Optimum ExecuTorch ⭐ 67 · Updated this week
- OpenVINO Tokenizers extension ⭐ 42 · Updated last week
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ⭐ 90 · Updated this week
- Notes and artifacts from the ONNX steering committee ⭐ 26 · Updated last week
- A minimalistic C++ Jinja templating engine for LLM chat templates ⭐ 187 · Updated 2 weeks ago
- The Triton backend for the ONNX Runtime ⭐ 162 · Updated this week
- A general 2–8 bit quantization toolbox (GPTQ/AWQ/HQQ/VPTQ) with easy export to ONNX/ONNX Runtime ⭐ 180 · Updated 6 months ago
- Triton CLI is an open source command line interface that enables users to create, deploy, and profile models served by the Triton Inference Server ⭐ 68 · Updated last month