intel / intel-npu-acceleration-library
Intel® NPU Acceleration Library
☆680 · Updated 3 months ago
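For context, the library's core workflow is a single call that compiles a PyTorch model for the NPU. The snippet below is a minimal sketch of that pattern based on the project's README; the exact `compile()` signature and supported dtypes may vary between releases.

```python
# Minimal sketch: offloading a small PyTorch model to the Intel NPU.
# Assumes `pip install intel-npu-acceleration-library` and an Intel NPU
# with its driver installed; the compile() API may differ by release.
import torch
import intel_npu_acceleration_library

model = torch.nn.Sequential(
    torch.nn.Linear(512, 512),
    torch.nn.ReLU(),
    torch.nn.Linear(512, 10),
).eval()

# One call compiles the model for the NPU; dtype controls weight precision.
npu_model = intel_npu_acceleration_library.compile(model, dtype=torch.float16)

with torch.no_grad():
    out = npu_model(torch.randn(1, 512))
print(out.shape)  # torch.Size([1, 10])
```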
Alternatives and similar repositories for intel-npu-acceleration-library
Users interested in intel-npu-acceleration-library are comparing it to the libraries listed below.
- AMD Ryzen™ AI Software includes the tools and runtime libraries for optimizing and deploying AI inference on AMD Ryzen™ AI-powered PCs. ☆583 · Updated last week
- A Python package that extends official PyTorch for extra performance on Intel platforms; a minimal usage sketch follows this list. ☆1,921 · Updated last week
- Run Generative AI models with a simple C++/Python API using the OpenVINO Runtime. ☆316 · Updated this week
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆481 · Updated this week
- Intel® NPU (Neural Processing Unit) Driver ☆294 · Updated last week
- OpenVINO Intel NPU Compiler ☆62 · Updated last week
- Generative AI extensions for onnxruntime ☆783 · Updated this week
- OpenAI Triton backend for Intel® GPUs ☆197 · Updated this week
- Intel® Extension for TensorFlow* ☆342 · Updated 4 months ago
- cudnn_frontend provides a C++ wrapper for the cuDNN backend API and samples showing how to use it ☆598 · Updated 3 weeks ago
- Composable Kernel: Performance Portable Programming Model for Machine Learning Tensor Operators ☆443 · Updated this week
- Low-bit LLM inference on CPU/NPU with lookup table ☆836 · Updated 2 months ago
- A collection of examples for the ROCm software stack ☆230 · Updated this week
- An innovative library for efficient LLM inference via low-bit quantization ☆349 · Updated 11 months ago
- A curated list of OpenVINO-based AI projects ☆146 · Updated last month
- Tenstorrent TT-BUDA Repository ☆315 · Updated 4 months ago
- The HIP Environment and ROCm Kit - A lightweight open-source build system for HIP and ROCm ☆269 · Updated this week
- Samples for Intel® oneAPI Toolkits ☆1,063 · Updated 3 weeks ago
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python. ☆369 · Updated this week
- Olive: Simplify ML Model Finetuning, Conversion, Quantization, and Optimization for CPUs, GPUs and NPUs. ☆2,036 · Updated this week
- HIPIFY: Convert CUDA to Portable C++ Code ☆602 · Updated this week
- DLPrimitives/OpenCL out-of-tree backend for PyTorch ☆362 · Updated 11 months ago
- No-code CLI designed for accelerating ONNX workflows ☆205 · Updated last month
- Library for modelling performance costs of different Neural Network workloads on NPU devices ☆34 · Updated last month
- ⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Pl… ☆2,169 · Updated 9 months ago
- A unified library of state-of-the-art model optimization techniques like quantization, pruning, distillation, speculative decoding, etc. … ☆1,093 · Updated this week
- SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX R… ☆2,465 · Updated this week
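The PyTorch extension entry near the top of this list (Intel® Extension for PyTorch) follows a similar one-call pattern: wrap an eval-mode model with `ipex.optimize()`. Below is a minimal sketch of that documented inference workflow; the optional arguments and the bf16 autocast context are illustrative and may vary across versions.

```python
# Minimal sketch: optimizing a PyTorch model with Intel® Extension for
# PyTorch (IPEX). Assumes `pip install intel-extension-for-pytorch`;
# optional arguments to ipex.optimize() may vary across versions.
import torch
import intel_extension_for_pytorch as ipex

model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, kernel_size=3, padding=1),
    torch.nn.ReLU(),
    torch.nn.Flatten(),
    torch.nn.Linear(16 * 32 * 32, 10),
).eval()

# ipex.optimize applies operator fusion and memory-format/dtype tweaks;
# dtype=torch.bfloat16 prepares the model for bf16 mixed precision.
model = ipex.optimize(model, dtype=torch.bfloat16)

with torch.no_grad(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
    out = model(torch.randn(1, 3, 32, 32))
print(out.shape)  # torch.Size([1, 10])
```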