intel / intel-xai-tools
Explainable AI (XAI) Tooling. XAI is used to discover and explain a model's predictions in a way that is interpretable to the user, exposing relevant information in the dataset, feature set, and model's algorithms.
☆37 · Updated last month
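As a rough illustration of the feature-attribution workflow that this kind of XAI tooling wraps, here is a minimal sketch using SHAP and scikit-learn as generic stand-ins. This is not the intel-xai-tools API; it only shows the concept of surfacing which features drove a model's predictions.

```python
# Minimal feature-attribution sketch (generic, NOT the intel-xai-tools API).
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Train a simple model on a public dataset.
data = load_diabetes(as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0
)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

# Explain a handful of predictions: per-feature contributions for each row.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test.iloc[:5])  # shape: (5, n_features)

# Rank features by how strongly they influenced these predictions.
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(X_test.columns, importance), key=lambda t: -t[1])[:5]:
    print(f"{name}: {score:.3f}")
```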
Alternatives and similar repositories for intel-xai-tools:
Users interested in intel-xai-tools are comparing it to the libraries listed below.
- Tutorials for running models on First-gen Gaudi and Gaudi2 for Training and Inference. The source files for the tutorials on https://dev… ☆59 · Updated last week
- Large Language Model Text Generation Inference on Habana Gaudi ☆33 · Updated last month
- Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU) ☆186 · Updated this week
- Reference models for Intel(R) Gaudi(R) AI Accelerator ☆162 · Updated last week
- oneCCL Bindings for Pytorch* ☆95 · Updated 2 weeks ago
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU(XPU) device. Note… ☆62 · Updated 2 months ago
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆462 · Updated this week
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆299 · Updated this week
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆13 · Updated 2 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆69 · Updated this week
- Run cloud native workloads on NVIDIA GPUs ☆168 · Updated last week
- Libraries and tools to support Transfer Learning ☆19 · Updated last week
- Pretrain, finetune and serve LLMs on Intel platforms with Ray ☆126 · Updated last week
- Issues related to MLPerf™ Inference policies, including rules and suggested changes ☆62 · Updated 2 months ago
- Issues related to MLPerf™ training policies, including rules and suggested changes ☆95 · Updated 2 weeks ago
- Accelerate your Gen AI with NVIDIA NIM and NVIDIA AI Workbench ☆159 · Updated last week
- OpenVINO™ integration with TensorFlow ☆179 · Updated 10 months ago
- Run Generative AI models with simple C++/Python API and using OpenVINO Runtime ☆269 · Updated this week
- MLPerf™ logging library ☆36 · Updated 2 weeks ago
- This repository hosts code that supports the testing infrastructure for the PyTorch organization. For example, this repo hosts the logic … ☆92 · Updated this week
- Examples for using ONNX Runtime for model training. ☆333 · Updated 6 months ago
- Dev repo for power measurement for the MLPerf™ benchmarks ☆21 · Updated last month
- Easily benchmark PyTorch model FLOPs, latency, throughput, allocated GPU memory, and energy consumption ☆102 · Updated last year
- MLCube® is a project that reduces friction for machine learning by ensuring that models are easily portable and reproducible. ☆155 · Updated 7 months ago
- Intel® Tensor Processing Primitives extension for Pytorch* ☆15 · Updated 2 weeks ago
- Setup and Installation Instructions for Habana binaries, docker image creation ☆25 · Updated 2 months ago
- Machine Learning Agility (MLAgility) benchmark and benchmarking tools ☆39 · Updated 2 months ago
- An innovative library for efficient LLM inference via low-bit quantization ☆350 · Updated 8 months ago
- Home for OctoML PyTorch Profiler ☆113 · Updated 2 years ago