intel / intel-extension-for-pytorch
A Python package that extends the official PyTorch so that models can easily obtain extra performance on Intel platforms
☆1,845 · Updated this week
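For quick orientation, below is a minimal sketch of the typical intel-extension-for-pytorch inference flow: wrap an eval-mode module with `ipex.optimize` and run under CPU bfloat16 autocast. The toy MLP, input shape, and dtype choice are illustrative assumptions, not taken from this page.

```python
# Minimal sketch, assuming intel-extension-for-pytorch is installed alongside
# a CPU build of PyTorch; the toy MLP and shapes are illustrative assumptions.
import torch
import intel_extension_for_pytorch as ipex

# Any eval-mode nn.Module can be handed to ipex.optimize.
model = torch.nn.Sequential(
    torch.nn.Linear(128, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
).eval()
data = torch.rand(32, 128)

# ipex.optimize returns a module with Intel-specific operator optimizations
# applied (here requesting bfloat16 weights).
model = ipex.optimize(model, dtype=torch.bfloat16)

with torch.no_grad(), torch.autocast("cpu", dtype=torch.bfloat16):
    out = model(data)
print(out.shape)  # torch.Size([32, 10])
```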
Alternatives and similar repositories for intel-extension-for-pytorch
Users interested in intel-extension-for-pytorch are comparing it to the libraries listed below.
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools (a usage sketch follows the list below) ☆464 · Updated this week
- SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime ☆2,402 · Updated this week
- Intel® Extension for TensorFlow* ☆338 · Updated last month
- Intel® NPU Acceleration Library ☆671 · Updated 3 weeks ago
- Run Generative AI models with a simple C++/Python API using the OpenVINO Runtime ☆274 · Updated this week
- Neural Network Compression Framework for enhanced OpenVINO™ inference ☆1,009 · Updated this week
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper, Ada, and Blackwell GPUs ☆2,400 · Updated this week
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT ☆2,745 · Updated this week
- A scalable inference server for models optimized with OpenVINO™ ☆723 · Updated this week
- Tools for easier OpenVINO development/debugging ☆9 · Updated last month
- ⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Platforms ☆2,171 · Updated 7 months ago
- cudnn_frontend provides a C++ wrapper for the cuDNN backend API and samples on how to use it ☆557 · Updated last month
- TorchBench is a collection of open source benchmarks used to evaluate PyTorch performance. ☆940 · Updated this week
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. ☆1,043 · Updated last year
- OpenVINO™ is an open source toolkit for optimizing and deploying AI inference ☆8,269 · Updated this week
- The Torch-MLIR project aims to provide first class support from the PyTorch ecosystem to the MLIR ecosystem. ☆1,529 · Updated this week
- Intel® AI Reference Models: contains Intel optimizations for running deep learning workloads on Intel® Xeon® Scalable processors and Intel® Data Center GPUs ☆714 · Updated this week
- Representation and Reference Lowering of ONNX Models in MLIR Compiler Infrastructure ☆853 · Updated last week
- DirectML is a high-performance, hardware-accelerated DirectX 12 library for machine learning. DirectML provides GPU acceleration for common machine learning tasks ☆2,446 · Updated 3 weeks ago
- An open-source efficient deep learning framework/compiler, written in Python. ☆698 · Updated 2 months ago
- OpenAI Triton backend for Intel® GPUs ☆184 · Updated this week
- Transformer-related optimization, including BERT and GPT ☆6,152 · Updated last year
- PyTorch extensions for high performance and large scale training. ☆3,313 · Updated 2 weeks ago
- FlashInfer: Kernel Library for LLM Serving ☆2,815 · Updated this week
- Common in-memory tensor structure ☆985 · Updated this week
- Olive: Simplify ML Model Finetuning, Conversion, Quantization, and Optimization for CPUs, GPUs and NPUs. ☆1,893 · Updated this week
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable ☆1,566 · Updated last year
- A machine learning compiler for GPUs, CPUs, and ML accelerators ☆3,147 · Updated this week
- A retargetable MLIR-based machine learning compiler and runtime toolkit. ☆3,132 · Updated this week
- oneAPI Deep Neural Network Library (oneDNN) ☆3,790 · Updated this week
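As referenced in the 🤗 Optimum Intel entry above, the following is a minimal sketch of the Optimum Intel path: exporting a Hugging Face checkpoint to OpenVINO IR via `OVModelForCausalLM` and generating text with it. The gpt2 checkpoint and the prompt are illustrative assumptions, not taken from this page.

```python
# Minimal sketch, assuming optimum[openvino] (Optimum Intel) and transformers
# are installed; the checkpoint and prompt are illustrative assumptions.
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "gpt2"
# export=True converts the Hugging Face checkpoint to OpenVINO IR on the fly,
# so it can run on Intel CPUs/GPUs through the OpenVINO Runtime.
model = OVModelForCausalLM.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("Intel Extension for PyTorch is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```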