microsoft / Olive
Olive: Simplify ML Model Finetuning, Conversion, Quantization, and Optimization for CPUs, GPUs and NPUs.
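Olive drives its optimization passes from a workflow config file. Below is a minimal sketch of invoking a workflow from Python, assuming the `olive.workflows.run` entry point shown in the project's documentation; `config.json` is a placeholder for your own workflow file, not a verified recipe:

```python
# Minimal sketch: run an Olive workflow from Python.
# Assumes the documented `olive.workflows.run` entry point;
# "config.json" is a placeholder for a workflow config that names the
# input model and the passes (conversion, quantization, ...) to apply.
from olive.workflows import run as olive_run

olive_run("config.json")
```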
☆2,246 · Updated last week
Alternatives and similar repositories for Olive
Users interested in Olive are comparing it to the libraries listed below.
- Generative AI extensions for onnxruntime ☆953 · Updated this week
- SOTA low-bit LLM quantization (INT8/FP8/MXFP8/INT4/MXFP4/NVFP4) & sparsity; leading model compression techniques on PyTorch, TensorFlow, … ☆2,577 · Updated last week
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆532 · Updated this week
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python. ☆420 · Updated this week
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy-to-use hardware optimization… ☆3,277 · Updated 3 weeks ago
- A Python package that extends official PyTorch to easily obtain extra performance on Intel platforms ☆2,010 · Updated this week
- onnxruntime-extensions: A specialized pre- and post-processing library for ONNX Runtime ☆441 · Updated this week
- ⚠️ DirectML is in maintenance mode ⚠️ DirectML is a high-performance, hardware-accelerated DirectX 12 library for machine learning. Direct… ☆2,545 · Updated this week
- Examples for using ONNX Runtime for model training. ☆361 · Updated last year
- PyTorch native quantization and sparsity for training and inference ☆2,657 · Updated last week
- ONNXMLTools enables conversion of models to ONNX ☆1,140 · Updated last week
- ⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Pl… ☆2,174 · Updated last year
- ONNX Optimizer ☆795 · Updated this week
- Examples for using ONNX Runtime for machine learning inferencing (see the inference sketch after this list). ☆1,601 · Updated this week
- Qualcomm® AI Hub Models is our collection of state-of-the-art machine learning models optimized for performance (latency, memory etc.) an… ☆915 · Updated last week
- A PyTorch quantization backend for Optimum ☆1,022 · Updated 2 months ago
- High-efficiency floating-point neural network inference operators for mobile, server, and Web ☆2,245 · Updated this week
- On-device AI across mobile, embedded and edge for PyTorch ☆4,226 · Updated this week
- AIMET is a library that provides advanced quantization and compression techniques for trained neural network models. ☆2,552 · Updated this week
- Simple, safe way to store and distribute tensors (a save/load sketch follows this list) ☆3,614 · Updated this week
- Supports PyTorch model conversion to LiteRT. ☆930 · Updated this week
- cudnn_frontend provides a C++ wrapper for the cuDNN backend API and samples showing how to use it ☆682 · Updated last week
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆2,092 · Updated 7 months ago
- A machine learning compiler for GPUs, CPUs, and ML accelerators ☆3,956 · Updated this week
- Run generative AI models with a simple C++/Python API using OpenVINO Runtime ☆428 · Updated this week
- Common utilities for ONNX converters ☆293 · Updated last month
- A unified library of SOTA model optimization techniques like quantization, pruning, distillation, speculative decoding, etc. It compresse… ☆1,925 · Updated this week
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… ☆3,132 · Updated this week
- AITemplate is a Python framework that renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (N… ☆4,702 · Updated 3 weeks ago
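For the ONNX Runtime inferencing examples referenced in the list, a minimal sketch of loading and running a model with the `onnxruntime` Python API; `model.onnx` and the input shape are placeholders for your own model:

```python
# Minimal sketch: single inference with ONNX Runtime's Python API.
# "model.onnx" and the (1, 3, 224, 224) input shape are placeholders.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name      # query the model's real input name
x = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_name: x})   # None = fetch all model outputs
print(outputs[0].shape)
```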
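And for safetensors, a quick save/load sketch using the `safetensors.torch` helpers from the library's public API; the tensor names are arbitrary example keys:

```python
# Quick sketch: store and reload PyTorch tensors with safetensors.
# Tensor names ("weight", "bias") are arbitrary example keys.
import torch
from safetensors.torch import save_file, load_file

tensors = {"weight": torch.randn(16, 16), "bias": torch.zeros(16)}
save_file(tensors, "model.safetensors")  # writes a safe, fast-loading file

loaded = load_file("model.safetensors")  # returns a dict of tensors
print(loaded["weight"].shape)
```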