🤗 Optimum Intel: Accelerate inference with Intel optimization tools
☆537 · Last updated Feb 23, 2026 (last week)
Alternatives and similar repositories for optimum-intel
Users interested in optimum-intel are comparing it to the libraries listed below.
- Run Generative AI models with simple C++/Python API and using OpenVINO Runtime (☆440, updated this week)
- OpenVINO Tokenizers extension (☆49, updated this week)
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy to use hardware optimization… (☆3,305, updated Feb 9, 2026)
- A curated list of OpenVINO based AI projects (☆185, updated Jun 30, 2025)
- Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU) (☆207, updated Feb 23, 2026)
- A Python package for extending the official PyTorch that can easily obtain performance on Intel platform (☆2,012, updated Feb 13, 2026)
- SOTA low-bit LLM quantization (INT8/FP8/MXFP8/INT4/MXFP4/NVFP4) & sparsity; leading model compression techniques on PyTorch, TensorFlow, … (☆2,590, updated this week)
- 📚 Jupyter notebook tutorials for OpenVINO™ (☆3,049, updated this week)
- ⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Pl… (☆2,175, updated Oct 8, 2024)
- Neural Network Compression Framework for enhanced OpenVINO™ inference (☆1,127, updated this week)
- Large Language Model Text Generation Inference on Habana Gaudi (☆34, updated Mar 20, 2025)
- OpenVINO™ is an open source toolkit for optimizing and deploying AI inference (☆9,790, updated this week)
- With OpenVINO Test Drive, users can run large language models (LLMs) and models trained by Intel Geti on their devices, including AI PCs … (☆37, updated Dec 15, 2025)
- Tools for easier OpenVINO development/debugging (☆10, updated Jul 16, 2025)
- An innovative library for efficient LLM inference via low-bit quantization (☆352, updated Aug 30, 2024)
- Intel® AI Reference Models: contains Intel optimizations for running deep learning workloads on Intel® Xeon® Scalable processors and Inte… (☆728, updated Feb 11, 2026)
- A scalable inference server for models optimized with OpenVINO™ (☆833, updated this week)
- OpenVINO Intel NPU Compiler (☆83, updated Feb 23, 2026)
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) device. Note… (☆65, updated Jun 30, 2025)
- 🎯 An accuracy-first, highly efficient quantization toolkit for LLMs, designed to minimize quality degradation across Weight-Only Quantiza… (☆853, updated this week)
- **ARCHIVED** Filesystem interface to 🤗 Hub (☆59, updated Apr 6, 2023)
- Efficient few-shot learning with Sentence Transformers (☆2,688, updated Dec 11, 2025)
- Accessible large language models via k-bit quantization for PyTorch (☆7,997, updated this week)
- A PyTorch quantization backend for optimum (☆1,025, updated Nov 21, 2025)
- Repository for OpenVINO's extra modules (☆166, updated this week)
- oneCCL Bindings for PyTorch* (deprecated) (☆105, updated Dec 31, 2025)
- Prune a model while finetuning or training (☆406, updated Jun 21, 2022)
- Reference models for Intel(R) Gaudi(R) AI Accelerator (☆170, updated Jan 8, 2026)
- Intel® NPU (Neural Processing Unit) Driver (☆379, updated Feb 19, 2026)
- Software Development Kit (SDK) for the Geti™ platform for Computer Vision AI model training (☆123, updated Feb 11, 2026)
- Hugging Face Inference Toolkit used to serve transformers, sentence-transformers, and diffusers models (☆91, updated Jan 9, 2026)
- Large Language Model Text Generation Inference (☆10,788, updated Jan 8, 2026)
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… (☆329, updated Sep 25, 2025)
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed (☆2,095, updated Jun 30, 2025)
- 🤗 Evaluate: A library for easily evaluating machine learning models and datasets (☆2,419, updated Jan 20, 2026)
- Accelerate LLMs with low-bit (FP4 / INT4 / FP8 / INT8) optimizations using ipex-llm (☆169, updated Apr 29, 2025)
- Simple, safe way to store and distribute tensors (☆3,645, updated this week)
- A notebook running TensorRT's StableDiffusion demo on Google Colaboratory (☆18, updated Feb 1, 2023)
- GenAI components at micro-service level; GenAI service composer to create mega-service (☆195, updated Feb 12, 2026)