microsoft / onnxruntime-genai
Generative AI extensions for onnxruntime
☆693 · Updated this week
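For orientation, a minimal text-generation sketch against the onnxruntime-genai Python package. The model folder, prompt, and search options below are placeholders, and the Generator API has shifted between releases, so treat this as illustrative rather than canonical:

```python
# Sketch: token-by-token generation with onnxruntime-genai
# (pip install onnxruntime-genai). "path/to/model" is a placeholder for a
# folder exported for the GenAI runtime (genai_config.json + ONNX weights).
import onnxruntime_genai as og

model = og.Model("path/to/model")
tokenizer = og.Tokenizer(model)

params = og.GeneratorParams(model)
params.set_search_options(max_length=128)

generator = og.Generator(model, params)
generator.append_tokens(tokenizer.encode("What does ONNX Runtime do?"))

# Decode until the generator reports completion.
while not generator.is_done():
    generator.generate_next_token()

print(tokenizer.decode(generator.get_sequence(0)))
```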
Alternatives and similar repositories for onnxruntime-genai:
Users interested in onnxruntime-genai are comparing it to the libraries listed below.
- Olive: Simplify ML Model Finetuning, Conversion, Quantization, and Optimization for CPUs, GPUs and NPUs. ☆1,869 · Updated this week
- onnxruntime-extensions: A specialized pre- and post-processing library for ONNX Runtime ☆375 · Updated this week
- An innovative library for efficient LLM inference via low-bit quantization ☆350 · Updated 7 months ago
- Examples for using ONNX Runtime for model training. ☆332 · Updated 6 months ago
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python (see the authoring sketch after this list). ☆341 · Updated this week
- Run Generative AI models with a simple C++/Python API on top of the OpenVINO Runtime ☆260 · Updated this week
- Advanced Quantization Algorithm for LLMs/VLMs. ☆438 · Updated this week
- Low-bit LLM inference on CPU with lookup table ☆735 · Updated 3 months ago
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆460 · Updated this week
- ☆1,025 · Updated last year
- Universal cross-platform tokenizers binding to HF and sentencepiece ☆323 · Updated last week
- The Qualcomm® AI Hub Models are a collection of state-of-the-art machine learning models optimized for performance (latency, memory etc.)… ☆672 · Updated 2 weeks ago
- LLaMa/RWKV onnx models, quantization and testcase ☆361 · Updated last year
- Intel® NPU Acceleration Library ☆667 · Updated 3 months ago
- Official implementation of Half-Quadratic Quantization (HQQ) ☆791 · Updated this week
- Common utilities for ONNX converters ☆267 · Updated 4 months ago
- nvidia-modelopt is a unified library of state-of-the-art model optimization techniques like quantization, pruning, distillation, speculat… ☆870 · Updated last week
- Production ready LLM model compression/quantization toolkit with hw accelerated inference support for both cpu/gpu via HF, vLLM, and SGLa… ☆481 · Updated this week
- A PyTorch quantization backend for Optimum (see the quantization sketch after this list). ☆922 · Updated last week
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆1,251 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆262 · Updated 6 months ago
- SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX R… ☆2,380 · Updated this week
- The Triton TensorRT-LLM Backend ☆827 · Updated last week
- Supporting PyTorch models with the Google AI Edge TFLite runtime. ☆555 · Updated this week
- Examples for using ONNX Runtime for machine learning inferencing (see the session API sketch after this list). ☆1,354 · Updated last week
- Use safetensors with ONNX 🤗 ☆54 · Updated last month
- VPTQ, A Flexible and Extreme low-bit quantization algorithm ☆628 · Updated 3 weeks ago
- Local LLM Server with NPU Acceleration ☆156 · Updated last week
- llama.cpp fork with additional SOTA quants and improved performance ☆323 · Updated this week
- Inference Vision Transformer (ViT) in plain C/C++ with ggml ☆266 · Updated last year
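As referenced in the ONNX Script entry above, a small authoring sketch. The opset import and eager-mode call follow the project's documented pattern, but the function itself (a tanh-approximate GELU) is a hypothetical example:

```python
# Sketch: authoring an ONNX function as plain Python with onnxscript.
import numpy as np
from onnxscript import FLOAT, script
from onnxscript import opset18 as op

@script()
def gelu_tanh(X: FLOAT[...]) -> FLOAT[...]:
    # tanh approximation of GELU, expressed over ONNX ops
    inner = 0.7978845608 * (X + 0.044715 * X * X * X)
    return 0.5 * X * (1.0 + op.Tanh(inner))

# Eager mode: call the scripted function directly on numpy arrays,
# or export it as a standalone ONNX model.
print(gelu_tanh(np.array([0.0, 1.0, 2.0], dtype=np.float32)))
model_proto = gelu_tanh.to_model_proto()
```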
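For the PyTorch quantization backend entry (the quanto library), a hedged post-training quantization sketch; the tiny Sequential model stands in for whatever torch.nn.Module you would actually quantize:

```python
# Sketch: weight-only int8 quantization with optimum-quanto
# (pip install optimum-quanto). The model below is a placeholder.
import torch
from optimum.quanto import freeze, qint8, quantize

model = torch.nn.Sequential(
    torch.nn.Linear(64, 64), torch.nn.ReLU(), torch.nn.Linear(64, 8)
)

quantize(model, weights=qint8)  # swap Linear weights for int8 qtensors
freeze(model)                   # materialize the quantized weights

with torch.no_grad():
    out = model(torch.randn(1, 64))
print(out.shape)
```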
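Finally, matching the ONNX Runtime inferencing examples entry, the core onnxruntime session API that most projects in this list build on; the model file, input name, and shape are placeholders:

```python
# Sketch: plain ONNX Runtime inference. "model.onnx" and the input shape
# are placeholders for whatever your exported model declares.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

input_name = session.get_inputs()[0].name
x = np.random.rand(1, 3, 224, 224).astype(np.float32)

# run(None, ...) returns every declared output, in order.
outputs = session.run(None, {input_name: x})
print(outputs[0].shape)
```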