intel / intel-xai-tools
Explainable AI (XAI) tooling. XAI is used to discover and explain a model's predictions in a way that is interpretable to the user, exposing relevant information in the dataset, the feature set, and the model's algorithms.
☆38 · Updated 2 months ago
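To illustrate the kind of question such tooling answers ("which inputs drove this prediction?"), here is a minimal from-scratch sketch of one common explanation technique, permutation feature importance. The toy model, data, and function names are illustrative assumptions, not the intel-xai-tools API:

```python
# Hypothetical sketch: permutation feature importance from scratch.
# Shuffling a feature that the model relies on degrades its error;
# shuffling an ignored feature changes nothing.
import random

def predict(row):
    # Toy "model": depends strongly on feature 0, weakly on feature 1,
    # and not at all on feature 2.
    return 3.0 * row[0] + 0.5 * row[1]

def mse(rows, targets):
    return sum((predict(r) - t) ** 2 for r, t in zip(rows, targets)) / len(rows)

def permutation_importance(rows, targets, n_features):
    base = mse(rows, targets)
    rng = random.Random(0)  # fixed seed for reproducibility
    importances = []
    for j in range(n_features):
        shuffled = [r[j] for r in rows]
        rng.shuffle(shuffled)
        permuted = [r[:j] + (v,) + r[j + 1:] for r, v in zip(rows, shuffled)]
        # Importance = error increase caused by destroying feature j.
        importances.append(mse(permuted, targets) - base)
    return importances

rows = [(float(i), float(i % 5), float(i % 2)) for i in range(50)]
targets = [predict(r) for r in rows]  # model fits this data exactly
imp = permutation_importance(rows, targets, 3)
```

With this setup, feature 0 gets the largest importance, feature 1 a small positive one, and the unused feature 2 exactly zero, matching the model's actual behavior.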
Alternatives and similar repositories for intel-xai-tools
Users interested in intel-xai-tools are comparing it to the libraries listed below.
- Intel® AI Reference Models: contains Intel optimizations for running deep learning workloads on Intel® Xeon® Scalable processors and Inte… ☆720 · Updated last week
- Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU) ☆201 · Updated this week
- Reference models for Intel(R) Gaudi(R) AI Accelerator ☆167 · Updated 2 months ago
- Tutorials for running models on First-gen Gaudi and Gaudi2 for Training and Inference. The source files for the tutorials on https://dev… ☆62 · Updated 2 months ago
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆512 · Updated this week
- Examples for using ONNX Runtime for model training. ☆354 · Updated last year
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆14 · Updated 2 months ago
- Triton Model Analyzer is a CLI tool to help with better understanding of the compute and memory requirements of the Triton Inference Serv… ☆497 · Updated this week
- Large Language Model Text Generation Inference on Habana Gaudi ☆34 · Updated 8 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆85 · Updated this week
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) devices. Note… ☆63 · Updated 4 months ago
- Qualcomm Cloud AI SDK (Platform and Apps) enables high performance deep learning inference on Qualcomm Cloud AI platforms delivering high … ☆68 · Updated 3 months ago
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆318 · Updated last month
- Provides end-to-end model development pipelines for LLMs and Multimodal models that can be launched on-prem or cloud-native. ☆509 · Updated 7 months ago
- oneCCL Bindings for Pytorch* (deprecated) ☆102 · Updated 2 weeks ago
- Run Generative AI models with simple C++/Python API and using OpenVINO Runtime ☆374 · Updated this week
- Common utilities for ONNX converters ☆284 · Updated 2 months ago
- Reference implementations of MLPerf® inference benchmarks ☆1,490 · Updated this week
- Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models with a focus on NVIDIA GPUs. ☆213 · Updated 7 months ago
- A Fusion Code Generator for NVIDIA GPUs (commonly known as "nvFuser") ☆362 · Updated this week
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python. ☆409 · Updated this week
- A profiling and performance analysis tool for machine learning ☆448 · Updated this week
- Issues related to MLPerf™ training policies, including rules and suggested changes ☆95 · Updated last month
- OpenVINO™ integration with TensorFlow ☆178 · Updated last year
- Pretrain, finetune and serve LLMs on Intel platforms with Ray ☆131 · Updated 2 months ago
- Home for OctoML PyTorch Profiler ☆114 · Updated 2 years ago
- The Triton backend for the ONNX Runtime. ☆167 · Updated last week
- Intel® End-to-End AI Optimization Kit ☆31 · Updated last year
- ☆119 · Updated last week
- Dev repo for power measurement for the MLPerf™ benchmarks ☆25 · Updated 2 months ago