Generative AI extensions for onnxruntime
☆962 · Feb 24, 2026 · Updated last week
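For context on the subject repository: onnxruntime-genai wraps ONNX Runtime with a small generate-loop API. Below is a minimal Python sketch, assuming the `onnxruntime_genai` package and a placeholder path to an exported model directory; exact method names (e.g. `append_tokens`) have shifted between releases, so treat it as illustrative rather than canonical.

```python
# Minimal text-generation loop with onnxruntime-genai.
# "path/to/exported-model" is a placeholder for a model folder produced by
# the project's model builder; API details may differ between releases.
import onnxruntime_genai as og

model = og.Model("path/to/exported-model")
tokenizer = og.Tokenizer(model)

params = og.GeneratorParams(model)
params.set_search_options(max_length=128)  # cap total sequence length

generator = og.Generator(model, params)
generator.append_tokens(tokenizer.encode("What is ONNX Runtime?"))

# Generate one token at a time until EOS or max_length is reached.
while not generator.is_done():
    generator.generate_next_token()

print(tokenizer.decode(generator.get_sequence(0)))
```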
Alternatives and similar repositories for onnxruntime-genai
Users that are interested in onnxruntime-genai are comparing it to the libraries listed below
- Olive: Simplify ML Model Finetuning, Conversion, Quantization, and Optimization for CPUs, GPUs and NPUs. ☆2,255 · Updated this week
- onnxruntime-extensions: A specialized pre- and post-processing library for ONNX Runtime ☆443 · Feb 23, 2026 · Updated last week
- This is a book for getting started with the Phi family of SLMs. Phi is a family of open-source AI models developed by Microsoft. Phi… ☆3,687 · Updated this week
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python. ☆429 · Updated this week
- ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator ☆19,389 · Updated this week
- Examples for using ONNX Runtime for machine learning inferencing. ☆1,610 · Feb 24, 2026 · Updated last week
- On-device AI across mobile, embedded and edge for PyTorch ☆4,312 · Updated this week
- LLM deployment project based on ONNX. ☆50 · Oct 9, 2024 · Updated last year
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy to use hardware optimization… ☆3,305 · Feb 9, 2026 · Updated 3 weeks ago
- No-code CLI designed for accelerating ONNX workflows ☆227 · Feb 19, 2026 · Updated last week
- A general 2–8 bit quantization toolbox with GPTQ/AWQ/HQQ/VPTQ, with easy export to ONNX/ONNX Runtime. ☆184 · Apr 2, 2025 · Updated 11 months ago
- llm-export can export LLM models to ONNX. ☆344 · Oct 24, 2025 · Updated 4 months ago
- Supports PyTorch model conversion with LiteRT. ☆944 · Updated this week
- SOTA low-bit LLM quantization (INT8/FP8/MXFP8/INT4/MXFP4/NVFP4) & sparsity; leading model compression techniques on PyTorch, TensorFlow, … ☆2,590 · Updated this week
- ⚠️ DirectML is in maintenance mode ⚠️ DirectML is a high-performance, hardware-accelerated DirectX 12 library for machine learning. Direct… ☆2,546 · Feb 20, 2026 · Updated last week
- OpenVINO Tokenizers extension ☆49 · Updated this week
- Run Generative AI models with a simple C++/Python API using the OpenVINO Runtime ☆440 · Feb 24, 2026 · Updated last week
- ONNXMLTools enables conversion of models to ONNX ☆1,142 · Feb 23, 2026 · Updated last week
- Call RWKV v4/v5/v6/v7 (Raven/World/Finch, 1B5–14B) models via rwkv.cpp from C# on CPU/GPU (supports INT4, INT8, Float16, Float32) ☆35 · Feb 21, 2025 · Updated last year
- TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizat… ☆12,938 · Updated this week
- ONNX Optimizer ☆797 · Feb 4, 2026 · Updated 3 weeks ago
- The Triton backend for the ONNX Runtime. ☆173 · Updated this week
- A collection of pre-trained, state-of-the-art models in the ONNX format ☆9,438 · Sep 16, 2025 · Updated 5 months ago
- Tensor library for machine learning ☆14,152 · Updated this week
- Universal LLM Deployment Engine with ML Compilation ☆22,082 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆71,234 · Updated this week
- Lightweight, standalone C++ inference engine for Google's Gemma models. ☆6,742 · Updated this week
- Large Language Model ONNX Inference Framework ☆35 · Nov 25, 2025 · Updated 3 months ago
- Examples for using ONNX Runtime for model training. ☆363 · Oct 23, 2024 · Updated last year
- Integrate cutting-edge LLM technology quickly and easily into your apps ☆27,341 · Updated this week
- Simplify your ONNX model (see the usage sketch after this list) ☆4,297 · Updated this week
- Official inference framework for 1-bit LLMs ☆28,640 · Feb 3, 2026 · Updated last month
- A toolkit to help optimize ONNX models ☆442 · Updated this week
- Low-bit LLM inference on CPU/NPU with lookup table ☆924 · Jun 5, 2025 · Updated 8 months ago
- LLM inference in C/C++ ☆96,322 · Updated this week
- Diffusion model (SD, Flux, Wan, Qwen Image, Z-Image, ...) inference in pure C/C++ ☆5,490 · Updated this week
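As referenced in the onnx-simplifier entry above, a minimal usage sketch; `model.onnx` and the output filename are placeholders, and the package also installs an `onnxsim` command-line entry point that performs the same simplification in one step.

```python
# Simplify an ONNX graph with onnx-simplifier (PyPI package: onnxsim).
import onnx
from onnxsim import simplify

model = onnx.load("model.onnx")       # placeholder input path
simplified, ok = simplify(model)      # returns (simplified_model, check_passed)
assert ok, "simplified model failed output-equivalence checking"
onnx.save(simplified, "model-simplified.onnx")
```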