justinchuby / onnx-safetensors
Use safetensors with ONNX 🤗
☆61 · Updated 3 months ago
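A minimal sketch of the idea behind the repo, not necessarily its actual API: round-tripping an ONNX model's initializers through a .safetensors file using only the plain `onnx` and `safetensors` packages. The paths `model.onnx` and `weights.safetensors` are placeholders chosen for illustration.

```python
# Illustrative sketch only: store ONNX weights in safetensors format and
# load them back. Paths are placeholders, not part of any real project.
import onnx
from onnx import numpy_helper
from safetensors.numpy import save_file, load_file

model = onnx.load("model.onnx")

# Collect the initializers (the model's weights) as numpy arrays keyed by name.
weights = {init.name: numpy_helper.to_array(init) for init in model.graph.initializer}

# Write them out as a .safetensors file (safe format, memory-mappable on load).
save_file(weights, "weights.safetensors")

# Later: read the tensors back and rebuild the graph's initializers in place.
loaded = load_file("weights.safetensors")
del model.graph.initializer[:]
model.graph.initializer.extend(
    numpy_helper.from_array(arr, name) for name, arr in loaded.items()
)
onnx.save(model, "model_rebuilt.onnx")
```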
Alternatives and similar repositories for onnx-safetensors
Users interested in onnx-safetensors are comparing it to the libraries listed below.
- Model compression for ONNX ☆96 · Updated 6 months ago
- Visualize ONNX models with model-explorer ☆34 · Updated 2 weeks ago
- Common utilities for ONNX converters ☆270 · Updated 6 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆263 · Updated 7 months ago
- Python bindings for ggml ☆141 · Updated 9 months ago
- Thin wrapper around GGML to make life easier ☆34 · Updated this week
- onnxruntime-extensions: A specialized pre- and post-processing library for ONNX Runtime ☆390 · Updated this week
- The Triton backend for the ONNX Runtime ☆148 · Updated 3 weeks ago
- OpenVINO Tokenizers extension ☆34 · Updated last week
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python ☆356 · Updated this week
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆117 · Updated this week
- A toolkit to help optimize ONNX models ☆153 · Updated this week
- Common source, scripts, and utilities shared across all Triton repositories ☆72 · Updated 3 weeks ago
- TensorRT-LLM server with Structured Outputs (JSON) built with Rust ☆54 · Updated last month
- Profile your CoreML models directly from Python 🐍 ☆27 · Updated 7 months ago
- Experiments with BitNet inference on CPU ☆55 · Updated last year
- Google TPU optimizations for transformers models ☆112 · Updated 4 months ago
- A general 2-8 bit quantization toolbox with GPTQ/AWQ/HQQ/VPTQ, with easy export to onnx/onnx-runtime ☆172 · Updated 2 months ago
- No-code CLI designed for accelerating ONNX workflows ☆192 · Updated 2 weeks ago
- AI Edge Quantizer: flexible post-training quantization for LiteRT models ☆41 · Updated last week
- ☆215 · Updated this week
- ☆71 · Updated 2 months ago
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆467 · Updated this week
- cortex.llamacpp is a high-efficiency C++ inference engine for edge computing. It is a dynamic library that can be loaded by any server a… ☆40 · Updated this week
- Inference server benchmarking tool ☆67 · Updated last month
- Module, Model, and Tensor Serialization/Deserialization ☆234 · Updated last week
- Rust crate for some audio utilities ☆23 · Updated 2 months ago
- Fast low-bit matmul kernels in Triton ☆311 · Updated this week
- An innovative library for efficient LLM inference via low-bit quantization ☆348 · Updated 9 months ago
- High-Performance SGEMM on CUDA devices ☆94 · Updated 4 months ago