justinchuby / onnx-safetensors
Use safetensors with ONNX 🤗
☆69 · Updated last month
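For context, onnx-safetensors stores an ONNX model's weights in a safetensors file instead of embedding them in the protobuf. The sketch below illustrates the underlying idea using only the `onnx` and `safetensors` packages; it is a conceptual illustration, not onnx-safetensors' actual API, and the helper names (`dump_weights`, `restore_weights`) are invented for this example.

```python
# Conceptual sketch: externalize ONNX initializers into a safetensors file.
# Uses only the `onnx` and `safetensors` packages; this is NOT the
# onnx-safetensors API, just an illustration of the idea behind it.
import onnx
from onnx import numpy_helper
from safetensors.numpy import load_file, save_file

def dump_weights(model_path: str, tensor_path: str) -> None:
    """Collect every initializer in the model and save it as safetensors."""
    model = onnx.load(model_path)
    weights = {
        init.name: numpy_helper.to_array(init)
        for init in model.graph.initializer
    }
    save_file(weights, tensor_path)

def restore_weights(model_path: str, tensor_path: str) -> onnx.ModelProto:
    """Load tensors back and overwrite the matching initializers in place."""
    model = onnx.load(model_path)
    tensors = load_file(tensor_path)
    for init in model.graph.initializer:
        if init.name in tensors:
            init.CopyFrom(numpy_helper.from_array(tensors[init.name], init.name))
    return model
```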
Alternatives and similar repositories for onnx-safetensors
Users interested in onnx-safetensors are comparing it to the libraries listed below.
- Model compression for ONNX ☆97 · Updated 9 months ago
- Python bindings for ggml ☆146 · Updated 11 months ago
- Thin wrapper around GGML to make life easier ☆40 · Updated 2 months ago
- Inference Vision Transformer (ViT) in plain C/C++ with ggml ☆293 · Updated last year
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python (see the sketch after this list). ☆376 · Updated last week
- No-code CLI designed for accelerating ONNX workflows ☆208 · Updated 2 months ago
- onnxruntime-extensions: A specialized pre- and post-processing library for ONNX Runtime ☆409 · Updated 2 weeks ago
- A toolkit to help optimize ONNX models ☆198 · Updated this week
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆153 · Updated last week
- Common utilities for ONNX converters ☆277 · Updated this week
- AMD-related optimizations for transformer models ☆83 · Updated last week
- AI Edge Quantizer: flexible post-training quantization for LiteRT models. ☆60 · Updated last week
- 👷 Build compute kernels ☆119 · Updated this week
- ☆74 · Updated 8 months ago
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆485 · Updated last week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆266 · Updated 10 months ago
- TTS support with GGML ☆160 · Updated this week
- An innovative library for efficient LLM inference via low-bit quantization ☆349 · Updated 11 months ago
- Visualize ONNX models with model-explorer ☆39 · Updated 3 months ago
- 🤗 Optimum ONNX: Export your model to ONNX and run inference with ONNX Runtime ☆36 · Updated this week
- A simple Flash Attention v2 implementation with ROCm (RDNA3 GPU, roc wmma), mainly used for stable diffusion (ComfyUI) in Windows ZLUDA en… ☆47 · Updated last year
- 🤗 Optimum ExecuTorch ☆64 · Updated this week
- The Triton backend for the ONNX Runtime. ☆159 · Updated 3 weeks ago
- Notes and artifacts from the ONNX steering committee ☆26 · Updated this week
- cortex.llamacpp is a high-efficiency C++ inference engine for edge computing. It is a dynamic library that can be loaded by any server a… ☆42 · Updated last month
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆88 · Updated this week
- A general 2–8 bit quantization toolbox with GPTQ/AWQ/HQQ/VPTQ, and easy export to onnx/onnx-runtime. ☆177 · Updated 4 months ago
- OpenVINO Tokenizers extension ☆40 · Updated this week
- Experiments with BitNet inference on CPU ☆54 · Updated last year
- Python package of rocm-smi-lib ☆22 · Updated last month
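As a taste of the ONNX Script entry above, the hedged sketch below authors a tiny ONNX function in plain Python. The decorator, opset import, and `to_model_proto` call follow onnxscript's published examples; the function name, shapes, and the 0.5 scale are invented for illustration.

```python
# Hedged sketch of authoring an ONNX model with onnxscript, following the
# patterns in its published examples. `scaled_residual` and the symbolic
# shape "N" are made up for this illustration.
from onnxscript import FLOAT, script
from onnxscript import opset18 as op

@script()
def scaled_residual(x: FLOAT["N"], y: FLOAT["N"]) -> FLOAT["N"]:
    # Promote the Python scalar to y's dtype, then build Mul/Add nodes.
    half = op.CastLike(0.5, y)
    return op.Add(x, op.Mul(y, half))

# The scripted function converts to a regular ModelProto, which can then be
# saved or run like any other ONNX model.
model = scaled_residual.to_model_proto()
```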