A Python package that extends the official PyTorch to easily obtain performance gains on Intel platforms
☆2,011 · Mar 30, 2026 · Updated last month
Alternatives and similar repositories for intel-extension-for-pytorch
Users who are interested in intel-extension-for-pytorch are comparing it to the libraries listed below.
- Intel® Extension for TensorFlow* ☆351 · Oct 29, 2025 · Updated 6 months ago
- ⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Pl… ☆2,178 · Oct 8, 2024 · Updated last year
- SOTA low-bit LLM quantization (INT8/FP8/MXFP8/INT4/MXFP4/NVFP4) & sparsity; leading model compression techniques on PyTorch, TensorFlow, … ☆2,628 · Updated this week
- oneCCL Bindings for Pytorch* (deprecated) ☆104 · Dec 31, 2025 · Updated 4 months ago
- OpenAI Triton backend for Intel® GPUs ☆249 · Updated this week
- oneAPI Deep Neural Network Library (oneDNN) ☆3,985 · Updated this week
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆580 · Updated this week
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) devices. Note… ☆65 · Jun 30, 2025 · Updated 10 months ago
- ☆89 · Updated this week
- Intel® AI Reference Models: contains Intel optimizations for running deep learning workloads on Intel® Xeon® Scalable processors and Inte… ☆731 · Feb 11, 2026 · Updated 2 months ago
- Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, DeepSeek, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V,… ☆8,795 · Jan 28, 2026 · Updated 3 months ago
- oneAPI Collective Communications Library (oneCCL) ☆264 · Apr 23, 2026 · Updated last week
- OpenVINO™ is an open source toolkit for optimizing and deploying AI inference ☆10,162 · Updated this week
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. ☆1,076 · Apr 17, 2024 · Updated 2 years ago
- Intel® NPU Acceleration Library ☆710 · Apr 24, 2025 · Updated last year
- Development repository for the Triton language and compiler ☆19,087 · Updated this week
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT ☆2,966 · Updated this week
- Run Generative AI models with a simple C++/Python API using OpenVINO Runtime ☆498 · Updated this week
- SYCL* Templates for Linear Algebra (SYCL*TLA) - SYCL based CUTLASS implementation for Intel GPUs ☆72 · Updated this week
- Transformer related optimization, including BERT, GPT ☆6,415 · Mar 27, 2024 · Updated 2 years ago
- ☆164 · Updated this week
- Accessible large language models via k-bit quantization for PyTorch. ☆8,168 · Apr 20, 2026 · Updated 2 weeks ago
- Fast and memory-efficient exact attention ☆23,628 · Updated this week
- AI PC starter app for doing AI image creation, image stylizing, and chatbot on a PC powered by an Intel® Arc™ GPU. ☆850 · Updated this week
- A Python package for extending the official PyTorch that can easily obtain performance on Intel platform ☆47 · Dec 12, 2024 · Updated last year
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy to use hardware optimization… ☆3,376 · Updated this week
- PyTorch extensions for high performance and large scale training. ☆3,409 · Apr 26, 2025 · Updated last year
- ☆436 · Sep 18, 2025 · Updated 7 months ago
- Neural Network Compression Framework for enhanced OpenVINO™ inference ☆1,156 · Updated this week
- Intel® Graphics Compute Runtime for oneAPI Level Zero and OpenCL™ Driver ☆1,377 · Updated this week
- Profiling Tools Interfaces for GPU (PTI for GPU) is a set of Getting Started Documentation and Tools Library to start performance analysi… ☆266 · Apr 27, 2026 · Updated last week
- FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/ ☆1,557 · Updated this week
- A SOTA quantization algorithm for high-accuracy low-bit LLM inference, seamlessly optimized for CPU/XPU/CUDA, with multi-datatype support… ☆1,068 · Updated this week
- PyTorch native quantization and sparsity for training and inference ☆2,807 · Updated this week
- Olive: Simplify ML Model Finetuning, Conversion, Quantization, and Optimization for CPUs, GPUs and NPUs. ☆2,305 · Updated this week
- CUDA Templates and Python DSLs for High-Performance Linear Algebra ☆9,663 · Apr 25, 2026 · Updated last week
- ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator ☆20,409 · Updated this week
- Intel staging area for llvm.org contribution. Home for Intel LLVM-based projects. ☆1,458 · Updated this week
- The Torch-MLIR project aims to provide first class support from the PyTorch ecosystem to the MLIR ecosystem. ☆1,796 · Apr 24, 2026 · Updated last week