intel / intel-extension-for-pytorch
A Python package that extends the official PyTorch to deliver additional performance on Intel platforms
☆2,010 · Updated this week
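For context, a minimal inference sketch of how this extension is typically applied on CPU (a sketch only, assuming intel-extension-for-pytorch and torchvision are installed; exact behavior may differ between releases):

```python
import torch
import torchvision.models as models
import intel_extension_for_pytorch as ipex  # assumes the extension is installed

# A standard torchvision model in eval mode, plus a dummy input
model = models.resnet50(weights=None).eval()
data = torch.rand(1, 3, 224, 224)

# ipex.optimize() applies Intel-specific operator and graph optimizations;
# dtype=torch.bfloat16 requests mixed precision on CPUs that support it
model = ipex.optimize(model, dtype=torch.bfloat16)

with torch.no_grad(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
    output = model(data)
```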
Alternatives and similar repositories for intel-extension-for-pytorch
Users interested in intel-extension-for-pytorch are comparing it to the libraries listed below.
- SOTA low-bit LLM quantization (INT8/FP8/MXFP8/INT4/MXFP4/NVFP4) & sparsity; leading model compression techniques on PyTorch, TensorFlow, … ☆2,581 · Updated this week
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools (see the usage sketch after this list) ☆532 · Updated last week
- Intel® NPU Acceleration Library ☆703 · Updated 9 months ago
- Intel® Extension for TensorFlow* ☆349 · Updated 3 months ago
- Intel® AI Reference Models: contains Intel optimizations for running deep learning workloads on Intel® Xeon® Scalable processors and Inte… ☆728 · Updated this week
- Neural Network Compression Framework for enhanced OpenVINO™ inference ☆1,126 · Updated this week
- Run Generative AI models with simple C++/Python APIs using the OpenVINO Runtime ☆433 · Updated this week
- ⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Pl… ☆2,174 · Updated last year
- cudnn_frontend provides a C++ wrapper for the cuDNN backend API and samples on how to use it ☆681 · Updated last week
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… ☆3,152 · Updated this week
- Reference implementations of MLPerf® inference benchmarks ☆1,525 · Updated this week
- Olive: Simplify ML Model Finetuning, Conversion, Quantization, and Optimization for CPUs, GPUs and NPUs. ☆2,249 · Updated this week
- The Torch-MLIR project aims to provide first class support from the PyTorch ecosystem to the MLIR ecosystem. ☆1,742 · Updated last week
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. ☆1,072 · Updated last year
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy to use hardware optimization… ☆3,279 · Updated 3 weeks ago
- TorchBench is a collection of open source benchmarks used to evaluate PyTorch performance. ☆1,012 · Updated this week
- A scalable inference server for models optimized with OpenVINO™ ☆823 · Updated last week
- Generative AI extensions for onnxruntime ☆957 · Updated this week
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT ☆2,944 · Updated this week
- ONNX Optimizer ☆795 · Updated last week
- A machine learning compiler for GPUs, CPUs, and ML accelerators ☆3,973 · Updated this week
- A unified library of SOTA model optimization techniques like quantization, pruning, distillation, speculative decoding, etc. It compresse… ☆1,964 · Updated this week
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python. ☆420 · Updated last week
- Tools for easier OpenVINO development/debugging ☆10 · Updated 6 months ago
- Representation and Reference Lowering of ONNX Models in MLIR Compiler Infrastructure ☆973 · Updated this week
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackab… ☆1,586 · Updated 2 weeks ago
- AMD Ryzen™ AI Software includes the tools and runtime libraries for optimizing and deploying AI inference on AMD Ryzen™ AI powered PCs. ☆769 · Updated this week
- DLPrimitives/OpenCL out-of-tree backend for PyTorch ☆389 · Updated 2 months ago
- FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/ ☆1,525 · Updated last week
- PyTorch native quantization and sparsity for training and inference (usage sketch below) ☆2,668 · Updated this week
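For the 🤗 Optimum Intel entry above, a minimal text-generation sketch with the OpenVINO backend (a sketch only, assuming optimum-intel with its OpenVINO extras is installed; the model ID is an arbitrary small example chosen for illustration):

```python
from transformers import AutoTokenizer
from optimum.intel import OVModelForCausalLM  # assumes optimum-intel[openvino] is installed

model_id = "gpt2"  # arbitrary small model used only for illustration
tokenizer = AutoTokenizer.from_pretrained(model_id)

# export=True converts the PyTorch checkpoint to OpenVINO IR on the fly
model = OVModelForCausalLM.from_pretrained(model_id, export=True)

inputs = tokenizer("Intel-optimized inference is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```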
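The last entry (which appears to be pytorch/ao, i.e. torchao) offers a comparable one-call quantization flow in stock PyTorch. A minimal sketch, assuming torchao is installed and noting that quantization API names have shifted between torchao releases:

```python
import torch
import torch.nn as nn
from torchao.quantization import quantize_, int8_weight_only  # assumes torchao is installed

# A toy model; quantize_() rewrites eligible nn.Linear modules in place
model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 10)).eval()

# Int8 weight-only quantization; newer torchao releases expose config objects instead
quantize_(model, int8_weight_only())

with torch.no_grad():
    out = model(torch.randn(2, 1024))
```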