intel / intel-xai-tools
Explainable AI (XAI) tooling. XAI is used to discover and explain a model's predictions in a way that is interpretable to the user, exposing relevant information in the dataset, feature set, and model algorithms. (A minimal illustration follows below.)
☆39 · Updated 4 months ago
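As a rough illustration of what this kind of tooling produces, here is a minimal sketch using the open-source `shap` library. intel-xai-tools builds on attribution methods of this kind, but this is not its own API; treat the model and dataset choices here as illustrative only.

```python
# Sketch: per-feature attributions for a classifier's predictions using
# the `shap` library. This illustrates the XAI concept only; it is not
# the intel-xai-tools API.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Which features pushed each prediction up or down, and by how much.
explainer = shap.Explainer(model.predict, X)
shap_values = explainer(X.iloc[:50])
shap.plots.beeswarm(shap_values)  # global view of per-feature impact
```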
Alternatives and similar repositories for intel-xai-tools
Users interested in intel-xai-tools are comparing it to the libraries listed below.
- Intel® AI Reference Models: contains Intel optimizations for running deep learning workloads on Intel® Xeon® Scalable processors and Inte… ☆719 · Updated last month
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools; see the sketch after this list. ☆489 · Updated this week
- Reference models for Intel(R) Gaudi(R) AI Accelerator ☆167 · Updated last week
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) devices. Note… ☆62 · Updated 2 months ago
- Easy and lightning-fast training of 🤗 Transformers on the Habana Gaudi processor (HPU) ☆194 · Updated this week
- oneCCL Bindings for Pytorch* ☆102 · Updated last month
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆13 · Updated last week
- A high-throughput and memory-efficient inference and serving engine for LLMs; see the sketch after this list. ☆83 · Updated this week
- MAD (Model Automation and Dashboarding) ☆24 · Updated this week
- Tutorials for running models on first-gen Gaudi and Gaudi2 for training and inference. The source files for the tutorials on https://dev… ☆61 · Updated last week
- Computation using data flow graphs for scalable machine learning ☆68 · Updated this week
- Run generative AI models with a simple C++/Python API using the OpenVINO Runtime; see the sketch after this list. ☆334 · Updated this week
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆315 · Updated this week
- Large Language Model Text Generation Inference on Habana Gaudi ☆34 · Updated 5 months ago
- Issues related to MLPerf™ Inference policies, including rules and suggested changes ☆64 · Updated this week
- A scalable inference server for models optimized with OpenVINO™ ☆759 · Updated this week
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python. ☆381 · Updated this week
- Issues related to MLPerf™ training policies, including rules and suggested changes ☆95 · Updated 3 weeks ago
- Neural Network Compression Framework for enhanced OpenVINO™ inference; see the sketch after this list. ☆1,076 · Updated this week
- Measure and optimize the energy consumption of your AI applications! ☆291 · Updated last month
- Examples for using ONNX Runtime for model training. ☆345 · Updated 10 months ago
- ☆120 · Updated last year
- Home for OctoML PyTorch Profiler ☆114 · Updated 2 years ago
- A validation and profiling tool for AI infrastructure ☆332 · Updated this week
- OpenVINO Tokenizers extension ☆40 · Updated this week
- Intel® Extension for TensorFlow* ☆346 · Updated 5 months ago
- Intel® Tensor Processing Primitives extension for Pytorch* ☆17 · Updated this week
- ☆124 · Updated this week
- ☆74 · Updated 5 months ago
- Triton Model Analyzer is a CLI tool to help better understand the compute and memory requirements of the Triton Inference Serv… ☆490 · Updated last week
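For a few of the better-known entries above, short usage sketches follow. First, 🤗 Optimum Intel: a minimal example of its documented OpenVINO export path. The checkpoint name is illustrative, and export behavior can vary across versions.

```python
# Sketch: accelerate a Hugging Face causal LM with Optimum Intel's
# OpenVINO backend. export=True converts the PyTorch checkpoint to
# OpenVINO IR on the fly; the checkpoint name is illustrative.
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer, pipeline

model_id = "gpt2"
model = OVModelForCausalLM.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(generator("Explainable AI helps", max_new_tokens=20)[0]["generated_text"])
```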
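The vLLM entry ("A high-throughput and memory-efficient inference and serving engine for LLMs") has a compact offline API; a minimal sketch, with an illustrative model name:

```python
# Sketch: offline batch generation with vLLM; the model name is illustrative.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, max_tokens=32)
for out in llm.generate(["Explainable AI is"], params):
    print(out.outputs[0].text)
```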
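The OpenVINO GenAI entry ("Run generative AI models with a simple C++/Python API") exposes generation through a small pipeline class; a minimal sketch, assuming a model already converted to OpenVINO IR in `model_dir`:

```python
# Sketch: text generation with OpenVINO GenAI's LLMPipeline.
# model_dir must contain an LLM already converted to OpenVINO IR
# (e.g. via optimum-cli); the path and prompt are illustrative.
import openvino_genai

pipe = openvino_genai.LLMPipeline("model_dir", "CPU")
print(pipe.generate("Explainable AI is", max_new_tokens=32))
```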
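Finally, the NNCF entry's post-training quantization entry point; a minimal sketch, with random calibration data standing in for real representative samples and an assumed existing IR file:

```python
# Sketch: NNCF post-training INT8 quantization of an OpenVINO model.
# "model.xml" is an assumed existing FP32 IR; random calibration data
# stands in for real representative samples.
import numpy as np
import nncf
import openvino as ov

model = ov.Core().read_model("model.xml")
samples = [np.random.rand(1, 3, 224, 224).astype(np.float32) for _ in range(10)]
calibration = nncf.Dataset(samples)

quantized = nncf.quantize(model, calibration)
ov.save_model(quantized, "model_int8.xml")
```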