microsoft / Olive
Olive: Simplify ML Model Finetuning, Conversion, Quantization, and Optimization for CPUs, GPUs and NPUs.
☆2,246 · Updated last week
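For context on what Olive does, below is a minimal sketch of driving an optimization workflow from Python. It assumes a workflow config file (here called `olive_config.json`, a hypothetical name) that describes the input model and the passes to run (e.g. ONNX conversion followed by quantization); the exact entry point and config schema can vary between Olive releases, so treat this as illustrative rather than definitive.

```python
# Minimal sketch (assumed usage): run an Olive workflow described by a JSON config.
# "olive_config.json" is a hypothetical config naming an input model and the passes
# to apply (e.g. conversion to ONNX, then quantization); consult Olive's docs for
# the actual schema of the release you use.
from olive.workflows import run as olive_run

olive_run("olive_config.json")  # executes the configured passes and writes the optimized model(s) to the output directory
```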
Alternatives and similar repositories for Olive
Users interested in Olive are comparing it to the libraries listed below.
- Generative AI extensions for onnxruntime · ☆953 · Updated this week
- SOTA low-bit LLM quantization (INT8/FP8/MXFP8/INT4/MXFP4/NVFP4) & sparsity; leading model compression techniques on PyTorch, TensorFlow, … · ☆2,577 · Updated last week
- A Python package extending the official PyTorch to easily obtain performance gains on Intel platforms · ☆2,008 · Updated 2 weeks ago
- onnxruntime-extensions: A specialized pre- and post-processing library for ONNX Runtime · ☆441 · Updated this week
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools · ☆532 · Updated this week
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python. · ☆420 · Updated this week
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy to use hardware optimization… · ☆3,277 · Updated 3 weeks ago
- Examples for using ONNX Runtime for model training. · ☆361 · Updated last year
- ⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Pl… · ☆2,174 · Updated last year
- Run Generative AI models with a simple C++/Python API using the OpenVINO Runtime · ☆428 · Updated this week
- ONNXMLTools enables conversion of models to ONNX · ☆1,139 · Updated last week
- ONNX Optimizer · ☆795 · Updated last week
- Simple, safe way to store and distribute tensors (a short usage sketch appears after this list) · ☆3,614 · Updated this week
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. · ☆2,092 · Updated 7 months ago
- ⚠️ DirectML is in maintenance mode ⚠️ DirectML is a high-performance, hardware-accelerated DirectX 12 library for machine learning. Direct… · ☆2,545 · Updated this week
- PyTriton is a Flask/FastAPI-like interface that simplifies Triton's deployment in Python environments. · ☆832 · Updated 5 months ago
- A PyTorch quantization backend for Optimum · ☆1,022 · Updated 2 months ago
- ☆1,029 · Updated 2 years ago
- PyTorch native quantization and sparsity for training and inference · ☆2,657 · Updated last week
- Intel® NPU Acceleration Library · ☆703 · Updated 9 months ago
- Common utilities for ONNX converters · ☆293 · Updated last month
- A unified library of SOTA model optimization techniques like quantization, pruning, distillation, speculative decoding, etc. It compresse… · ☆1,925 · Updated this week
- This repository contains tutorials and examples for Triton Inference Server · ☆815 · Updated last week
- Qualcomm® AI Hub Models is our collection of state-of-the-art machine learning models optimized for performance (latency, memory etc.) an… · ☆909 · Updated last week
- 🎯 An accuracy-first, highly efficient quantization toolkit for LLMs, designed to minimize quality degradation across Weight-Only Quantiza… · ☆839 · Updated this week
- Official implementation of Half-Quadratic Quantization (HQQ) · ☆912 · Updated last month
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. Documentation: … · ☆2,312 · Updated 8 months ago
- A machine learning compiler for GPUs, CPUs, and ML accelerators · ☆3,956 · Updated this week
- Convert TensorFlow, Keras, TensorFlow.js and TFLite models to ONNX · ☆2,514 · Updated 4 months ago
- High-efficiency floating-point neural network inference operators for mobile, server, and Web · ☆2,245 · Updated this week
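As a concrete illustration of the safetensors entry above ("Simple, safe way to store and distribute tensors"), here is a minimal save/load round trip using the `safetensors.torch` helpers. The file name and tensor names are illustrative, not taken from the project itself.

```python
# Minimal sketch: store and reload a dict of tensors with safetensors.
# File and tensor names are illustrative.
import torch
from safetensors.torch import save_file, load_file

tensors = {"weight": torch.randn(4, 4), "bias": torch.zeros(4)}
save_file(tensors, "model.safetensors")    # writes a single .safetensors file

restored = load_file("model.safetensors")  # loads the tensors back (CPU by default)
assert torch.equal(restored["bias"], tensors["bias"])
```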