intel / intel-xai-tools
Explainable AI (XAI) tooling. XAI is used to discover and explain a model's predictions in a way that is interpretable to the user, exposing relevant information about the dataset, the feature set, and the model's algorithms.
☆39 · Updated last month
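To illustrate the kind of output an XAI workflow produces — a ranking of input features by their contribution to a model's predictions — here is a minimal sketch using scikit-learn's permutation importance. This is a generic stand-in for the concept only; intel-xai-tools has its own API, which is not shown here.

```python
# Sketch: explain a model by ranking feature importance.
# Uses sklearn permutation importance as a generic XAI example;
# this is NOT the intel-xai-tools API.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Toy dataset where only 2 of 5 features carry signal.
X, y = make_classification(n_samples=300, n_features=5, n_informative=2,
                           n_redundant=0, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: shuffle one feature at a time and
# measure how much the model's score drops as a result.
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature_{i}: {imp:.3f}")
```

The informative features should show clearly larger importance scores than the noise features, which is the interpretable signal an XAI tool surfaces to the user.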
Alternatives and similar repositories for intel-xai-tools
Users interested in intel-xai-tools are comparing it to the libraries listed below.
- Intel® AI Reference Models: contains Intel optimizations for running deep learning workloads on Intel® Xeon® Scalable processors and Inte… ☆718 · Updated this week
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆503 · Updated this week
- Reference models for Intel(R) Gaudi(R) AI Accelerator ☆165 · Updated last month
- Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU) ☆199 · Updated this week
- Tutorials for running models on First-gen Gaudi and Gaudi2 for Training and Inference. The source files for the tutorials on https://dev… ☆62 · Updated last month
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆84 · Updated this week
- oneCCL Bindings for Pytorch* ☆102 · Updated 2 months ago
- Examples for using ONNX Runtime for model training. ☆352 · Updated last year
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆317 · Updated last month
- Reference implementations of MLPerf® inference benchmarks ☆1,478 · Updated this week
- Home for OctoML PyTorch Profiler ☆114 · Updated 2 years ago
- Triton Model Analyzer is a CLI tool to help with better understanding of the compute and memory requirements of the Triton Inference Serv… ☆495 · Updated this week
- ML model training for edge devices ☆167 · Updated 2 years ago
- Large Language Model Text Generation Inference on Habana Gaudi ☆34 · Updated 7 months ago
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python. ☆404 · Updated this week
- ☆61 · Updated this week
- Computation using data flow graphs for scalable machine learning ☆68 · Updated this week
- The Triton backend for the PyTorch TorchScript models. ☆163 · Updated this week
- Common utilities for ONNX converters ☆283 · Updated last month
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆216 · Updated this week
- OpenAI Triton backend for Intel® GPUs ☆215 · Updated this week
- An innovative library for efficient LLM inference via low-bit quantization ☆349 · Updated last year
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU(XPU) device. Note… ☆63 · Updated 4 months ago
- Provides end-to-end model development pipelines for LLMs and Multimodal models that can be launched on-prem or cloud-native. ☆508 · Updated 6 months ago
- Measure and optimize the energy consumption of your AI applications! ☆307 · Updated last week
- Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models with a focus on NVIDIA GPUs. ☆213 · Updated 6 months ago
- Issues related to MLPerf™ training policies, including rules and suggested changes ☆95 · Updated last month
- Run Generative AI models with simple C++/Python API and using OpenVINO Runtime ☆365 · Updated this week
- The Triton backend for the ONNX Runtime. ☆163 · Updated 3 weeks ago
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind… ☆161 · Updated last month